In recent years, AI-driven image editing has emerged as a promising field with numerous applications. This work explores the capabilities of generative diffusion models for image editing guided by reference images. We focus on leveraging a set of reference images that exemplify the desired editing features and applying those features to a target image. By experimenting with various hyperparameters, modifying core components of the diffusion model, and integrating the CLIP model, we demonstrate improvements in image editing performance. This paper details our methodology, presents results across different configurations, and discusses the overall potential of the system for image editing tasks.