
Image Stitching Problem Using Python And OpenCV

I got output like the one below after stitching the result of 24 stitched images to the next (25th) image. Before that, the stitching was good. Does anyone know why/when the output of stitching comes out like this?

Solution 1:

Firstly, I was not able to recreate your problem and solve it, as the images were too big for my system to process. However, I faced the same problem in my Panorama Stitching project, so I am sharing the reason behind it and my approach to solving it. I hope this helps you too.

Here's what my problem looked like when I stitched 4 images together just like you did.

My problem

As you can see, the 4th image was getting distorted a lot, which should not happen. The same thing happened to you, but on a greater level.

Now, here's the output when I stitched 8 images after some image pre-processing.

Output after image pre-processing

After some pre-processing on the input images, I was able to stitch 8 images together perfectly without any distortion.

To understand the exact reason behind this kind of distortion, watch this video by Joseph Redmon, between 50:26 and 1:07:23.

As suggested in the video, we first have to project the images onto a cylinder, unroll them, and then stitch these unrolled images together.

Below are the initial input image (left) and the image after projection onto a cylinder and unrolling (right).

Image before and after pre-processing

For your problem, as you are using satellite images, I suspect projection onto a sphere would work better than a cylinder; however, you'll have to give it a try.
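In case you want to experiment with that, below is a rough, untested sketch of what a spherical analogue of the Convert_xy function further down might look like. The name Convert_xy_sphere and the forward model (the standard spherical projection x = f*tan(theta), y = f*tan(phi)/cos(theta)) are my assumptions, not something taken from my project.

import numpy as np

def Convert_xy_sphere(x, y, center, f):
    # Hypothetical inverse spherical mapping (my sketch): maps a pixel
    # (x, y) of the unrolled spherical image back to a sub-pixel location
    # in the input image; center and f have the same meaning as in
    # ProjectOntoCylinder below. x and y may be NumPy arrays.
    theta = (x - center[0]) / f   # horizontal angle
    phi = (y - center[1]) / f     # vertical angle
    xt = f * np.tan(theta) + center[0]
    yt = (f * np.tan(phi) / np.cos(theta)) + center[1]
    return xt, yt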

Below is my code for projecting an image onto a cylinder and unrolling it, for reference. The mathematics behind it is the same as given in the video.
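In math form (my notation), the inverse mapping implemented by Convert_xy, with image centre (x_c, y_c) and focal length f, is:

x_t = f \tan\left(\frac{x - x_c}{f}\right) + x_c
y_t = \frac{y - y_c}{\cos\left((x - x_c)/f\right)} + y_c

That is, for each pixel (x, y) of the unrolled image, we compute the sub-pixel location (x_t, y_t) it came from in the initial image.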

import numpy as np


def Convert_xy(x, y):
    # Inverse cylindrical mapping: for a pixel (x, y) of the unrolled
    # image, find the corresponding sub-pixel point in the initial image.
    global center, f

    xt = ( f * np.tan( (x - center[0]) / f ) ) + center[0]
    yt = ( (y - center[1]) / np.cos( (x - center[0]) / f ) ) + center[1]
    
    return xt, yt


def ProjectOntoCylinder(InitialImage):
    global w, h, center, f
    h, w = InitialImage.shape[:2]
    center = [w // 2, h // 2]
    f = 1100  # 1100 field; 1000 Sun; 1500 Rainier; 1050 Helens

    # Creating a blank transformed image
    TransformedImage = np.zeros(InitialImage.shape, dtype=np.uint8)
    
    # Storing all coordinates of the transformed image in 2 arrays (x and y coordinates)
    AllCoordinates_of_ti = np.array([np.array([i, j]) for i in range(w) for j in range(h)])
    ti_x = AllCoordinates_of_ti[:, 0]
    ti_y = AllCoordinates_of_ti[:, 1]
    
    # Finding corresponding coordinates of the transformed image in the initial image
    ii_x, ii_y = Convert_xy(ti_x, ti_y)

    # Rounding off the coordinate values to get exact pixel values (top-left corner)
    ii_tl_x = ii_x.astype(int)
    ii_tl_y = ii_y.astype(int)

    # Finding transformed image points whose corresponding
    # initial image points lie inside the initial image
    GoodIndices = (ii_tl_x >= 0) * (ii_tl_x <= (w-2)) * \
                  (ii_tl_y >= 0) * (ii_tl_y <= (h-2))

    # Removing all the outside points from everywhere
    ti_x = ti_x[GoodIndices]
    ti_y = ti_y[GoodIndices]
    
    ii_x = ii_x[GoodIndices]
    ii_y = ii_y[GoodIndices]

    ii_tl_x = ii_tl_x[GoodIndices]
    ii_tl_y = ii_tl_y[GoodIndices]

    # Bilinear interpolation
    dx = ii_x - ii_tl_x
    dy = ii_y - ii_tl_y

    weight_tl = (1.0 - dx) * (1.0 - dy)
    weight_tr = (dx)       * (1.0 - dy)
    weight_bl = (1.0 - dx) * (dy)
    weight_br = (dx)       * (dy)
    
    TransformedImage[ti_y, ti_x, :] = ( weight_tl[:, None] * InitialImage[ii_tl_y,     ii_tl_x,     :] ) + \
                                      ( weight_tr[:, None] * InitialImage[ii_tl_y,     ii_tl_x + 1, :] ) + \
                                      ( weight_bl[:, None] * InitialImage[ii_tl_y + 1, ii_tl_x,     :] ) + \
                                      ( weight_br[:, None] * InitialImage[ii_tl_y + 1, ii_tl_x + 1, :] )


    # Getting the x coordinate to remove the black region from the right and left of the transformed image
    min_x = min(ti_x)

    # Cropping out the black region from both sides (using symmetry)
    TransformedImage = TransformedImage[:, min_x : -min_x, :]

    return TransformedImage, ti_x-min_x, ti_y
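As a side note, here is what the vectorized bilinear interpolation step above does for a single non-integer coordinate (a toy example of my own, not part of the project code):

import numpy as np

img = np.arange(12, dtype=np.float64).reshape(3, 4)  # toy 3x4 single-channel "image"
x, y = 1.3, 0.6             # fractional source coordinate (column, row)
tlx, tly = int(x), int(y)   # top-left neighbouring pixel
dx, dy = x - tlx, y - tly

# Weighted average of the 4 surrounding pixels, exactly as in the code above
value = ((1.0 - dx) * (1.0 - dy) * img[tly, tlx] +
         dx * (1.0 - dy) * img[tly, tlx + 1] +
         (1.0 - dx) * dy * img[tly + 1, tlx] +
         dx * dy * img[tly + 1, tlx + 1])
print(value)  # 3.7, between the values of its 4 neighbours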

You just have to call the function ProjectOntoCylinder and pass it an image; it returns the resultant image and the coordinates of the white pixels in the mask image. Use the code below to call this function and get the mask image.

# Applying Cylindrical projection on Image
Image_Cyl, mask_x, mask_y = ProjectOntoCylinder(Image)

# Getting Image Mask
Image_Mask = np.zeros(Image_Cyl.shape, dtype=np.uint8)
Image_Mask[mask_y, mask_x, :] = 255
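For completeness, a minimal end-to-end driver might look like this (the file names are placeholders of mine):

import cv2
import numpy as np

Image = cv2.imread('input.jpg')   # placeholder path
Image_Cyl, mask_x, mask_y = ProjectOntoCylinder(Image)

# Building the mask of valid (non-black) pixels, as above
Image_Mask = np.zeros(Image_Cyl.shape, dtype=np.uint8)
Image_Mask[mask_y, mask_x, :] = 255

cv2.imwrite('cylindrical.jpg', Image_Cyl)
cv2.imwrite('mask.jpg', Image_Mask)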

Here are links to my project and its detailed documentation for reference:

Part 1: Source Code, Documentation

Part 2: Source Code, Documentation
