Project 4A: Image Warping and Mosaicing
Background
In this project, I create image mosaics from multiple photos. I start by shooting and digitizing the pictures. Then I compute the homography and warp the left image into the space of the right image. This lets me create a canvas and place both images on it, forming a mosaic. For the overlapping regions, I use alpha blending to get a smooth transition without sudden, sharp edges.
Shooting the Pictures
I began by taking pairs of pictures for the different mosaics I wanted to create. When taking photos, I kept the center of projection (COP) constant by keeping my camera position fixed and only rotating the phone about the camera's axis. The images I took are shown below. The first three are for rectification, and the rest are for mosaicing.
Recover Homographies
In this step, I write the function computeH(im1_points, im2_points), which recovers the homography H mapping im1_points to im2_points. To find H, I use the equation p' = Hp, where p and p' are corresponding points from the two images. Since H_33 is a scaling factor and set to 1, H has 8 unknown entries, and each correspondence p'_i = Hp_i gives 2 linear equations, so we need at least 4 pairs of points. However, to be robust to noise, I select more correspondences (around 15-20), set up an overdetermined linear system in the entries of H, and solve for H with least squares.
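A minimal sketch of this least-squares setup is below. The function name and point format mirror the description above; treat it as an illustration rather than the exact code:

```python
import numpy as np

def computeH(im1_points, im2_points):
    """Recover H such that im2_points ~ H @ im1_points (least squares).

    im1_points, im2_points: (N, 2) arrays of (x, y) correspondences, N >= 4.
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(im1_points, im2_points):
        # Each correspondence contributes two linear equations in the
        # eight unknown entries of H (H[2, 2] is fixed to 1).
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y])
        b.append(xp)
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y])
        b.append(yp)
    h, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return np.append(h, 1).reshape(3, 3)
```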
Warping the Images
In this part, I implement the warpImage(im, H) function, which uses the homography H to warp im. The implementation is similar to the inverse warp from project 3, but I added logic for figuring out the warped image's size. The overall outline is to first pipe the corners of the image through H to get a bounding box for the output. This bounding box defines the destination grid of points we want values on. Then, using inverse warping and interpolation, we can determine the pixel values on this destination grid. Some destination pixels have no corresponding source point, since the bounding box is larger than the original image; I marked those pixels as black.
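A rough sketch of this procedure, using nearest-neighbor sampling for brevity (the actual implementation could just as well use bilinear interpolation):

```python
import numpy as np

def warpImage(im, H):
    """Inverse-warp im with homography H; return the warped image and its bounding box."""
    h, w = im.shape[:2]
    # Forward-map the corners through H to find the output bounding box.
    corners = np.array([[0, 0, 1], [w, 0, 1], [w, h, 1], [0, h, 1]], dtype=float).T
    mapped = H @ corners
    mapped = mapped[:2] / mapped[2]
    xmin, ymin = np.floor(mapped.min(axis=1)).astype(int)
    xmax, ymax = np.ceil(mapped.max(axis=1)).astype(int)

    # Destination grid of (x, y, 1) points in output coordinates.
    xs, ys = np.meshgrid(np.arange(xmin, xmax), np.arange(ymin, ymax))
    dest = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])

    # Inverse warp: pull every destination pixel back to source coordinates.
    src = np.linalg.inv(H) @ dest
    src = src[:2] / src[2]

    # Pixels with no source point stay black (zeros).
    out = np.zeros((ymax - ymin, xmax - xmin, im.shape[2]))
    valid = (src[0] >= 0) & (src[0] <= w - 1) & (src[1] >= 0) & (src[1] <= h - 1)
    sx = np.round(src[0, valid]).astype(int)
    sy = np.round(src[1, valid]).astype(int)
    out.reshape(-1, im.shape[2])[valid] = im[sy, sx]
    return out, (xmin, ymin, xmax, ymax)
```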
Image Rectification
Here, I verify that computeH and warpImage are working as expected before using them for mosaicing. To do this, I used 3 example images where I knew the geometry of the objects. Since homographies can map any quadrilateral to any other quadrilateral, my first test was to map an arbitrary quadrilateral to a square. Then, I tested warping an image of my MacBook taken from an angle to a top-down view (ground-plane rectification). Finally, I rectified a photo of a poster taken from a fairly sharp angle so that the poster appears head-on. Below are the results: each original image with its correspondences labeled, as well as its rectification, followed by a small sketch of the rectification call.
Quadrilateral -> Square
Laptop -> Top-Down View
Poster -> Front-Facing
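For instance, the laptop rectification amounts to clicking the laptop's corners, choosing target square corners by hand, and reusing the two functions above. The corner coordinates and file name below are made up for illustration:

```python
import numpy as np
import skimage.io as skio

laptop_im = skio.imread("laptop.jpg") / 255.0  # hypothetical file name

# Hand-picked corners of the laptop in the photo (x, y), and the
# top-down square we want them to map to (coordinates are illustrative).
laptop_corners = np.array([[210, 340], [620, 310], [680, 560], [180, 600]])
square_corners = np.array([[0, 0], [400, 0], [400, 400], [0, 400]])

H = computeH(laptop_corners, square_corners)
rectified, bbox = warpImage(laptop_im, H)
```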
Blend the Images into a Mosaic
Now, I finally use the warping to create a mosaic! I start by using the web tool to select corresponding points between the left and right images for each scene. Then, I find the H that warps the left image to the right image. Now I have two images in the same space, warped_im_1 and im_2, and can add them. To add them nicely, I first create a canvas that is large enough. My warp function also returns the bounding box, and I use the coordinates of the bounding boxes to figure out where the two images overlap. Once I know the overlap region, I can use it to make the canvas just the right size, as well as to fill the corresponding regions of the canvas. Essentially, I compute row and column offsets from the bounding boxes, place warped_im_1 on the canvas, and then place im_2 shifted by these offsets.
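A sketch of this bookkeeping, assuming the bounding box is given in the right image's coordinate frame as returned by warpImage above (the helper name make_canvas and the exact layout are my own guess):

```python
import numpy as np

def make_canvas(im1_warped, bbox1, im2):
    """Place the warped left image and the right image on a shared canvas.

    bbox1 = (xmin, ymin, xmax, ymax) of the warped left image in the right
    image's frame; the right image's own box is simply (0, 0, width, height).
    """
    xmin, ymin, xmax, ymax = bbox1
    h2, w2 = im2.shape[:2]
    # The canvas must cover both images.
    cxmin, cymin = min(xmin, 0), min(ymin, 0)
    cxmax, cymax = max(xmax, w2), max(ymax, h2)
    canvas = np.zeros((cymax - cymin, cxmax - cxmin, 3))

    # Row/column offsets shift each image's top-left corner into canvas coordinates.
    canvas[ymin - cymin:ymax - cymin, xmin - cxmin:xmax - cxmin] = im1_warped
    canvas[0 - cymin:h2 - cymin, 0 - cxmin:w2 - cxmin] = im2
    return canvas
```

Placing im_2 last simply overwrites the overlap, which is exactly the hard seam that the alpha blending below is meant to fix.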
However, as seen below, this alone is not enough, as it can lead to artifacts in the overlapping region, like sharp edges. To remedy this, I use alpha blending in the overlapping region. Since I know from above which part of the canvas corresponds to the overlap, I go back to that section and replace the pixel values with convex combinations of the left and right images. My coefficient alpha decreases linearly as I move left to right across the overlap. This means that right at the left edge of the overlap the result is mostly warped_im_1, at the right edge it is mostly im_2, and an appropriate convex combination in between.
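A minimal sketch of this linear ramp, assuming the two overlap regions have already been cropped to the same shape (the helper name is mine):

```python
import numpy as np

def alpha_blend_overlap(left_pixels, right_pixels):
    """Blend two same-shaped (H, W, C) overlap regions with a linear ramp.

    alpha goes from 1 at the left edge to 0 at the right edge, so the blend
    favors the warped left image on the left and the right image on the right.
    """
    w = left_pixels.shape[1]
    alpha = np.linspace(1.0, 0.0, w)[None, :, None]  # broadcasts over rows and channels
    return alpha * left_pixels + (1 - alpha) * right_pixels
```

The blended strip then replaces the corresponding slice of the canvas.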
Below, I demonstrate the results for each of my scenes. For each scene, I show (1a, 1b) the left and right images with their correspondences, (2a) the mosaic without alpha blending on the left, and (2b) the mosaic with alpha blending on the right.