GNSS Inertial Solutions – Mapping with UAVs


I have been looking around lately for a GPS+IMU solution that can be carried in our portable mapping pods to enable the direct geo-referencing of aerial imagery and minimise the use of ground control. Whilst checking out the Applanix web site I came across this really cool webinar on GNSS inertial solutions for UAV mapping. It's a really comprehensive look at this technology as applied to UAV mapping. As you would expect, being sponsored by Applanix they push their own GNSS solutions a bit, which is fair enough. Apart from GNSS they also talk about different types of airborne sensors, including a UAV hyperspectral system. It's packed full of info, and definitely worth a watch if you are new to the world of UAV remote sensing. At a bit over 1 hour 20 minutes it's a long one, so grab yourself a cuppa, a TimTam or 3, and settle back and enjoy.



Ever wondered how Airborne LIDAR Works?

Whilst mucking about on the interweb, I came across this great video on how LIDAR works. It does a great job of explaining a complex subject, so enjoy!








Portable Mapping Pod Update

As I have alluded to in earlier posts I have been working on a portable mapping pod system, over in my day job at Aerial Acquisitions. Designing, building, testing and refining the pods and sensors has been a long and on-going exercise. So it has been great to see the first versions of our systems taking flight around the country on various projects.

Mapping pods installed on our company 172 for early test flights around Sydney

One of these projects was a forestry project in the wilds of Tasmania, which is turning out to be a great test of the system and our mapping pod concept. Now I have to admit to being a little nervous about sending a system to Tasmania, far away from technical support, on one of its first major hit-outs. Aerial survey tends to have an amendment to Murphy's Law: if something can fail it will, and the likelihood of failure increases with distance from home base. So far Murphy has been kind and only thrown us two small issues, both easily resolved.

Why Tassie is a good test for the mapping pod

Tasmania is a difficult place to capture imagery, due to really variable weather and lots of cloud cover. As the saying goes, if you don't like the weather in Tasmania just wait an hour! Constant cloud makes capturing imagery difficult, and you may only get 2-3 cloud-free days per month. Often you may only get half a day clear, either in the morning or the afternoon. This presents some interesting problems for image capture. It's not economically viable to have a survey aircraft and crew sitting on the ground for 28 days per month watching cloud drift past from the comfort of a bar stool at the local pub (been there!). But to take advantage of the short gaps in cloud you need a local aircraft on the spot that can be mobilised quickly. So enter the portable mapping pod, strapped to a local aircraft and flown by local aircrews that know the weather.

Project Methodology

I thought I would briefly outline the methodology of our project in Tasmania, as it's fairly typical of how I see our pods operating once released into the wild.

Getting there – Mobilisation

A major design requirement for our pods was that they had to be completely portable. So we designed them to fit into a small Pelican case, roughly the size of a small suitcase, to make travel and shipping easy. All packed in its case ready to ship, the system only weighs around 8 kg, allowing it to be taken with you as luggage on a commercial flight, or sent via mail or courier. For this project my business partner and I opted to jump on a commercial flight and take the pod with us as luggage, as we wanted to train some local pilots on the system and oversee the initial install on a local aircraft. Now that we have a local aircraft and pilots lined up, for any future projects we will simply ship the pod down.

This is an early test version of one of our mapping pods, packed up ready for shipping. As well as the DSLR there is also a Tetracam MCA6 in the Pelican case.

Being able to jump on a commercial flight with the system offers large savings over having to ferry our company survey aircraft down. I estimated that to ferry our company Cessna 172 from Sydney to Tasmania would have been around 15 hours return depending on winds, and cost around $8k AUD including pilot wages and expenses. In contrast, two of us were able to fly down for less than $1k. For future projects the savings are even greater, as I can simply ship the system to Tasmania for a few hundred dollars. If I were to take on a project in Perth or Darwin, the savings in mobilisation would be greater still.

Fitting to a Local aircraft

For this project we were able to find a local Cessna 172XP which we were able to fit our pod to in around 40 minutes. The 172XP is a nifty aircraft: it has a bigger donk than the standard 172, a constant-speed propeller, and fuel injection, giving slightly better cruise and climb performance than the standard 172. Normally the increased performance of the XP would not really matter, but for this project a number of our coupes need to be flown at close to 10,000 ft above sea level, so the increased climb performance will come in handy.

mapping pod install
Mapping Pod being fitted – In this photo the step is being removed ready to fit the pod
Step to be removed
Step that needs to be removed when fitting the pod

We found a local engineer who was able to oversee the fitting of the pod to the aircraft in around 40 minutes. Fitting the pod is really simple. Firstly, the co-pilot step is removed from the landing gear leg by removing 3 bolts. Our pod then simply fits in place of the step, and the 3 bolts are re-fitted. Lastly, a few cable ties are used to secure the single cable from the pod into the aircraft cabin. This cable runs from the flight management system in the cabin out to the pod; it triggers the camera and records when the camera fires.

One of the reasons the system is so simple to install is that it does not require aircraft power to operate; our mapping pods were designed to run independently of aircraft power. Once you start running powered cables out along wing struts or landing gear legs, engineers get understandably nervous and things start to get greatly complicated.

mapping pod installed
Mapping pod installed and ready for survey. Note the clean installation with only one small cable running out to the pod


Flight Management System (FMS)

pod flight management system
Flight management system installed in C172 cockpit

Our systems are designed to be easily operated by a single pilot, with no camera operator / navigator required. After hours and hours of flight testing we have fine-tuned our cockpit set-up into a simple system that works. It comprises a small touch-screen tablet running the flight management software, mounted in the eye line of the pilot. Tablet mounting is flexible, allowing individual pilots to set the system up to their own preference and allow easy access.

Once airborne, the FMS acts like a big video game and gives the pilot guidance to fly the project. It also fires the camera at pre-planned photo locations and records a GPS event for each photo. Because the camera is buried in the pod away from the pilot's sight, the FMS also gives confirmation on the tablet screen that the camera has fired. This saves the rather expensive exercise of the pilot flying around for hours with a camera that's not working.
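For the curious, the core trigger logic of a system like this can be sketched in a few lines. This is a minimal illustration only (not our actual flight software), assuming the flight plan is a list of pre-planned photo centres in latitude/longitude and a simple flat-earth distance check:

```python
import math

def should_fire(pos, photo_centre, tolerance_m=5.0):
    """Return True when the aircraft is within tolerance_m of the
    pre-planned photo centre. pos and photo_centre are (lat, lon) in
    degrees; a flat-earth approximation is fine over a few metres."""
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = 111_320.0 * math.cos(math.radians(pos[0]))
    dy = (photo_centre[0] - pos[0]) * m_per_deg_lat
    dx = (photo_centre[1] - pos[1]) * m_per_deg_lon
    return math.hypot(dx, dy) <= tolerance_m
```

When the check passes, a real system would fire the camera, log the GPS event, and flash the confirmation on the tablet.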

Flight management system in action

We were able to train our local pilot, who had never flown survey before, on our system in around an hour. After flying a number of test runs over the airport he was confident enough to head off into the wild blue yonder and have a go at some real projects. He has now flown over 30 small forestry areas without issue and has commented on how comfortable he feels with the system.

Processing Data

Once the aircraft lands, the data card is removed from the Nikon D800 camera and the data is uploaded to me via the interweb for processing in Agisoft PhotoScan. For our Tasmanian project the client only requires simple orthophotos to update their GIS, and our pods are proving ideal for this application.

Having been around in the good old days of aerial survey when we still used 250 ft long rolls of film, it still amazes me how seamless the whole process is now. When I started in aerial photography, turning data around quickly meant processing the film myself in a small tin shed on the airfield after landing. Once the film was processed I would then rush the still-wet film to the office so they could start on the photogrammetry overnight. This wasn't too bad in winter, but that tin shed was not much fun in 36°C summer heat!

Section of a forestry orthophoto created with a portable mapping pod

Wrap up

I have to say I think it's pretty cool that I can be remotely managing an active aerial survey project in another state without leaving my office. Using our portable mapping pods with a local aircraft and crew to capture the data is proving to be very effective. There is still some fine-tuning to be done on transferring data, but overall it's working really well. The next step is to integrate some more sensor options, with multispectral and thermal coming soon.

What's really exciting for me is that using portable mapping pods carried by local aircraft has the potential to slash data capture costs compared with traditional methods, delivering data at bargain-basement prices. I see real potential for our portable mapping pods to be used for a range of projects, not just commercial mapping work. I would love to see some of these systems deployed on environmental projects, rapid-response events, and humanitarian projects.





Holiday Break

Apologies for the lack of posts over the last few weeks; I have been on a family holiday over the Christmas and New Year break. I spent some laptop- and phone-free time soaking my pasty backside in the ocean down on the beautiful NSW South Coast. I hope everyone had a merry Christmas and a happy New Year. I aim to crank out some more posts in the next week or two, so stay tuned…



First Images out of WorldView-4

DigitalGlobe has released the first images from its WorldView-4 satellite. WorldView-4's first image was captured on November 26 and features the Yoyogi National Gymnasium, as seen above.

WorldView-4 doing its thing

Here are some quick stats on the WorldView-4 bird. It's claimed to have a circular error 90% (CE90) of 3 m, which is impressive when you consider the image was taken from an orbit 617 km above the earth's surface. Even more impressive is the image resolution of 30 cm GSD, which is getting into manned-aircraft territory. Before you get too excited, this is for the PAN (black and white) band only. If you want standard colour imagery (RGB) then the resolution drops to 1.2 m GSD, which for a satellite is still rather good. There is also always the possibility of producing 30 cm colour imagery through pan sharpening, but to me this always looks a little unnatural, with weird halos and colour tones. If you are wondering what pan-sharpened imagery looks like, Google Earth uses a lot of it outside of capital cities, especially in Australia.
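If you are curious how pan sharpening works, its simplest form (the Brovey transform) just rescales each colour band by the ratio of the high-resolution PAN band to the per-pixel mean of the colour bands. A minimal sketch, assuming reflectance values in 0-1 and the RGB already resampled onto the PAN grid:

```python
import numpy as np

def brovey_pansharpen(pan, rgb):
    """Scale each low-res band by pan / intensity, where intensity is
    the per-pixel mean of the RGB bands. pan: (H, W), rgb: (H, W, 3)."""
    intensity = rgb.mean(axis=2, keepdims=True)
    ratio = pan[..., None] / np.maximum(intensity, 1e-6)  # avoid divide-by-zero
    return np.clip(rgb * ratio, 0.0, 1.0)
```

The halos and odd colour tones mentioned above come from the ratio being computed per pixel, so sharp PAN edges bleed into all three colour bands at once.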

WorldView-4 will collect over 680,000 sq km of data every day and has a revisit time of around 4.5 days. Revisit time may be an issue if you live somewhere with only a few cloud-free days per month.

Lastly, like most modern satellites, data can be collected in stereo, meaning that DSMs and DEMs can be created from the system.



Investigation of band-to-band registration accuracy of the Tetracam MCA6 multispectral camera

Whilst cleaning up my office the other day I came across a paper I wrote as part of my Masters study, investigating the band-to-band accuracy of the Tetracam MCA6. Tetracam MCA4s and MCA6s are commonly used cameras in airborne applications, but rolling shutter issues have limited their use in UAVs. One other issue people found was problems with the band-to-band alignment, which I look into here. Just a word of warning: unless you are into this stuff it can put a glass eye to sleep!


The Tetracam MCA6 (MCA6) is a 6-band multispectral airborne camera system comprised of 6 individual cameras in one housing. This low-cost system is small and lightweight, and is able to be carried by light single-engine aircraft and UAVs. This allows the capture of economical high-resolution multispectral aerial imagery and gives the sensor potential for use in a number of environmental and agricultural fields. The post-processing software that comes with the Tetracam MCA6 is called Pixel Wrench 2 (PW2). PW2 uses a basic camera alignment file to perform the image-to-image registration of the 6 individual images into a final 6-band TIFF. This algorithm does not take into account the lens distortions of each of the individual cameras. It produces a 6-band TIFF which appears to have good band-to-band registration in the center of the image. However, the registration accuracy decreases radially from the center of the image towards the edges of the frame [1,21], and obvious misalignment errors are apparent towards the edge of the final 6-band TIFF frame. These band-to-band registration errors may limit the camera's application in high-accuracy remote sensing projects where a high level of band-to-band registration (BBR) is necessary. This paper aims to investigate the level of band-to-band registration of the MCA6 sensor and the PW2 software. We aim to do this by using a simple tie point method to determine the BBR accuracy across the 6 bands of the image. We will also conduct a small literature review of the principles of lens distortions and transform functions for image registration.

Tetracam MCA6 multispectral camera

The miniature multiple camera array, hereafter referred to as the 'MCA6', is a low-cost multispectral camera produced by Tetracam Inc. The MCA6 is a lightweight sensor designed for aerial and UAV operations, primarily for precision agriculture applications. The sensor consists of 6 individual sensors housed in the one unit. Each sensor is comprised of an objective lens with an interchangeable band-pass filter, a progressive shutter and a digital CMOS (complementary metal-oxide-semiconductor) sensor. Two important characteristics of the CMOS sensor are its high noise immunity and low power consumption. Each of the 6 CMOS sensors has dimensions of 6.66 x 5.32 mm and a pixel size of 5.7 microns. One exposure of the sensor captures 1.3 megapixels per channel, giving a total of 7.8 megapixels across the 6 bands. The MCA6 can capture data at either 8- or 10-bit resolution, which can be defined by the user. Each CMOS sensor / channel records its data to its own individual 2 GB flash card. All 6 sensors are synchronized so that they fire simultaneously. One camera is assigned as the 'master camera' and is used to set the global settings which are then applied to the other 5 slave cameras, labeled 1-5.


Figure 1: Sensitivity of the Tetracam MCA6 CMOS sensor

For each MCA6 exposure a monochrome image is recorded by each of the six individual cameras. These 6 images are then stacked together to form a 6-band image in Tetracam's Pixel Wrench 2 software. Accurate band-to-band registration is critical if the MCA6 camera is to be used in remote sensing applications. Any band-to-band misalignment in the final image could produce errors if any spectral analysis is carried out on the image. This would limit the sensor's usefulness in remote sensing applications.

Importance of Band-to-Band Registration

Band-to-band registration (BBR) is the measurement of the spatial position difference between two imaging bands, usually measured in pixels. Casual analysis indicates that BBR errors between bands can be visually noticeable down to about 0.25 pixels. Commercial multispectral (MS) systems have a requirement for sub-pixel registration [20].

Good registration between bands is important for spectral analysis, where the relative spectral fluxes between bands affect accurate spectral classification of small targets and around target edges [20]. BBR does not add spatial information; rather, spatial registration is the first step to improving spectral classification from multispectral images, by placing the energy in each band at the same spatial location. This helps obtain a more accurate spectrum for each pixel. Determining the BBR is only the first step in a spectral analysis.
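The tie point approach used in this paper reduces to a very small computation: match tie points between the master band and a slave band, then report the RMS of the residual offsets in pixels. A minimal sketch (the tie-point matching itself is assumed done elsewhere; the function name is mine):

```python
import numpy as np

def bbr_rms(master_pts, slave_pts):
    """RMS band-to-band misregistration in pixels, given N matched tie
    points as (N, 2) arrays of (col, row) coordinates in each band."""
    residuals = slave_pts - master_pts
    return float(np.sqrt((residuals ** 2).sum(axis=1).mean()))
```

An RMS approaching or exceeding the 0.25-pixel visibility threshold quoted above would flag the band pair as problematic.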


Band-to-Band Registration (BBR) using Pixel Wrench 2 software (PW2)

Band-to-band registration is done automatically in Pixel Wrench 2 (PW2) using an alignment file, known as the global MCA file. Each MCA6 camera is initially calibrated in the Tetracam factory and supplied with a global MCA file for image processing in PW2. An alignment file can also be generated by the user in the PW2 software if cameras drift out of alignment or filters are changed. This is done by measuring the length and angle of a line between two points on both the master image and each of the slave images. The global MCA file then records the translation, rotation and scaling between each of the 5 slave cameras and the master camera: for each slave camera it stores the X,Y offset in pixels, the scale, the rotation, and a vignetting coefficient. When the 6 per-channel images are combined to form a single 6-band image, the slave images are corrected using the measured values above. It is important to note that the global MCA file does not take lens distortion into account, either when the file is calculated or when the 6-band image is created. This leads to an image which tends to have good band-to-band registration in the centre of the image, where lens distortion is minimal, but can have large errors of up to a few pixels out towards the edges of the image as the lens distortion increases [1,21].
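The per-channel correction described above amounts to applying a similarity transform to each slave channel. A toy illustration (parameter names are my own, not the global MCA file format):

```python
import numpy as np

def align_slave(points, dx, dy, scale, theta_deg):
    """Map (N, 2) pixel coordinates from a slave channel into the master
    frame using the per-channel translation, rotation and scale."""
    t = np.radians(theta_deg)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    return scale * points @ rot.T + np.array([dx, dy])
```

Because the same four numbers are applied to every pixel of a channel, any residual lens distortion (which varies with radial distance) is left uncorrected, which is exactly the edge-of-frame error discussed here.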

For lower-accuracy remote sensing applications, such as basic crop health mapping, the PW2 software produces an image with a BBR level that many users may find acceptable. However, other authors [1,21] have investigated the application of the camera in remote sensing, and both papers found that the level of BBR produced by PW2 was not suitable for such applications due to BBR errors towards the edges of the frames.

The standard band-pass filters for the MCA6 are as follows:


Band center (nm)   Width (nm)
490                10
550                10
680                10
720                10
800                10
900                25


Table 1: MCA6 band combination

Lens Distortion of the MCA6 camera


Figure 2: Radial distortion of the Tetracam MCA6 camera [1]

Lens distortion plays a major role in the band-to-band registration of the MCA6 sensor. A single image from the MCA6 is comprised of 6 individual images. Each image is captured by an individual camera and then band-stacked using PW2 software to form a 6-band image. Each of the 6 individual cameras has its own lens, with slightly differing values of lens distortion, as shown in the figure above. Figure 2 also shows that all of the lenses exhibit similar distortions out to approximately the 100-pixel radial distance. At radial distances greater than 200 pixels the individual lenses tend to exhibit differing distortion curves. At the very edge of the frame, a radial distance of 800 pixels, there is a 5-pixel difference in the pixel displacement between Channels 2 and 1.

Discussion of lens distortion

Lens distortion is a significant problem in the analysis of digital images, especially in 3D measurement and analysis [2]. There are 3 basic types of lens distortion: radial distortion, de-centering distortion and thin prism distortion [9]. Of these 3 types, radial distortion is by far the most dominant, as Zhang [10] notes: "it is likely that the distortion function is totally dominated by the radial components, and especially dominated by the first term. It has also been found that any more elaborated modeling not only would not help (negligible when compared with sensor quantization), but also would cause numerical instability".

Radial distortion is caused by a radial shift in magnification from the edge of the lens towards the center of the lens [1]. The amount of radial distortion tends to increase with shorter focal length / wider-angle lenses. Radial distortion causes a radial shift in the position of a pixel in an image. It can further be broken into two types: negative and positive displacement. Negative displacement radially shifts points towards the origin point of lens distortion, resulting in a pincushion distortion effect. Conversely, positive displacement shifts points away from the lens distortion origin, resulting in a barrel distortion effect [1,4,5].

Research on lens distortion began in 1919, when A. Conrady first introduced the decentering distortion model. Later, D.C. Brown presented the Brown-Conrady model in 1966 [11]. The Brown-Conrady model has since become widely used and accepted [4,7,9]. It uses an even-order polynomial to calculate the radial displacement of a given image point. It is commonly recommended that this polynomial be limited to the first two terms of radial distortion, as higher-order terms are insignificant in most cases [10].
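As a worked example, the common two-term radial form maps an undistorted point (x, y), measured relative to the distortion centre, to a distorted point by scaling with the factor 1 + k1*r^2 + k2*r^4. A minimal sketch with the centre assumed at the origin:

```python
def distort_point(x, y, k1, k2):
    """Apply two-term radial (Brown-Conrady style) distortion about the
    distortion centre (0, 0); coordinates in normalised image units."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor
```

Note how the displacement grows with r^3 and r^5, which is why the band-to-band errors in Figure 2 are negligible near the centre but reach several pixels at the frame edge.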

The Brown-Conrady model requires prior calculation of radial and tangential distortion coefficients. An accessible approach for calculating the coefficients is to use a planar calibration grid of known geometric properties. Multiple images of the calibration grid are captured from different orientations. An iterative process then estimates both the intrinsic and extrinsic camera parameters based upon point correspondence between the known geometric properties of the scene and the distorted points within the image [1]. Extrinsic camera parameters describe the camera's position and orientation in space, while intrinsic parameters include the true image center, scale factor, and lens focal length [3].

In order to apply the Brown-Conrady model to correct an image you first need to determine the distortion coefficients; the methodologies to determine these are explained below. To date there have been two broad methods of lens calibration. The first is the multiple-views method, which uses point correspondences between two or more images (e.g. the Australis software). There are a number of varying multiple-view methodologies: Stein [17] uses epipolar and trilinear constraints, Fitzgibbon [18] uses a linear-line methodology, and Hartley and Kang [19] use a parameter-free methodology. The second group of methodologies uses a single view or image and works on the distortion of straight lines captured in the sample image. These models assume that the perspective camera model applies and calculate radial distortion by measuring how much each line is distorted in the image. Devernay et al., Brauer et al., and Tardif et al. have all proposed straight-line models of this second type.

 Lens Calibration of the Tetracam MCA6 camera

We attempted to calibrate the 6 individual channels of the MCA6 sensor using the Australis camera calibration software and targets. This software performed well for the 4 visible channels of the camera, but the targets were not visible to the two IR cameras. This meant that only 4 of the 6 cameras in the MCA6 could be calibrated using this method. We investigated removing the filters to make each camera operate in the visible spectrum, but once the filters were removed a number of the cameras were out of focus and again unable to be calibrated using the Australis software. One possible alternative method is to take 3 or more frames of captured aerial imagery from the Tetracam and run them through Agisoft PhotoScan Pro in the free model mode. The software generates tie points, completes a bundle adjustment, and is then able to calculate the lens distortions for all 6 channels with the filters installed. This method, however, is outside the scope of this study, so we will refer to the lens distortion profile in Figure 2, from another MCA6 camera, for this study.

Image Registration

Image registration is a computational method for determining the point-by-point correspondence between two images of a scene, which may then be used to fuse complementary information in the images or estimate the geometric and/or intensity differences between them. Image-to-image registration requires two basic steps: the first is the determination of a number of suitable control points in each of the images to be registered and the correspondence of these points; the second is to choose and implement a transform function, based on the correspondences from step one, for the rest of the image.

Because of the nonlinear nature of image acquisition systems and, sometimes, the deformable nature of the scene, the images to be registered often have nonlinear geometric differences [22]. For image pairs with a small nonlinear geometric difference, a simple linear transformation can often achieve an adequate registration; the image pairs from the Tetracam MCA6 may fall into this category. However, if the image pairs have a large geometric difference, a simple linear transform may not perform adequately and more complex local transforms may be needed. The problem becomes even more difficult when the images to be registered contain rigid as well as nonrigid bodies. The transformation function for registration of such images should be able to rigidly register parts of the images while nonrigidly registering the rest [22].

Knowing a set of corresponding control points in two images, many transformation functions can be used to accurately map the control points to each other. A proper transformation function will map the remaining points in the images accurately as well. Some transformation functions, although mapping corresponding control points to each other accurately, warp the sensed image too much, causing large errors in the registration away from the control points. Also, since a transformation is computed from the control point correspondences, error in the correspondences will carry over to the transformation function. It is desirable for a transformation to smooth out noise and small inaccuracies in the correspondences. Therefore, when noise and inaccuracies are present, approximation methods are preferred over interpolating methods in image registration [22].

Similarity Transformations (type used by PW2)

The similarity transformation is the transformation of Cartesian coordinate systems and models the global translational, rotational, and scaling differences between two images. It can be represented by the formulas:

X = xs cos θ − ys sin θ + h,

Y = xs sin θ + ys cos θ + k.

In the above formulas s, θ and (h, k) are the scaling, rotational, and translational differences between the two images. If two control points on the images are known, then the rotation can be calculated by determining the angle of the line connecting the two points in each image. The scale difference between the two images is calculated from the ratio of the distances between the two points in each image. Once the scale and rotational values are known, the translation values can be solved by substituting them into the above formulas and solving for the two unknowns h and k. If the correspondences are noisy or inaccurate then it is preferable to use more than two correspondences per image pair. When more than two correspondences are used, the scaling, rotational, and translational differences can be calculated using a least-squares or clustering method. Clustering methods are best used when there may be large errors or outliers in the correspondences; least squares is used if the inaccuracies can be modeled by zero-mean noise.

Similarity transformations are for registration of rigid bodies, but the least squares or clustering methods do not use any rigidity constraint to find the parameters. If noise is not zero-mean and/or a large number of outliers exist, the least squares and clustering methods may fail to find the correct transformation parameters. To find the transformation parameters by specifically using the rigidity constraint, in [17], from a large set of corresponding control points, about a dozen correspondences that had the highest match rating and were widely spread over the image domain were selected. Then, transformation parameters were determined from combinations of four points at a time, and the combination that produced a linear transformation closest to a rigid transformation was used to register the images. The idea is to use a few correspondences that are very accurate instead of using a large number of correspondences, some of which are inaccurate.

Since similarity transformations are for registration of rigid bodies, they can be used to register some aerial and satellite images where the scene is rather flat and the platform is at a distance, looking down at a normal angle to the scene. In medical imaging, bony structures can be registered with this transformation. The similarity transformation is widely used in the registration of brain images, since the brain is contained in a rigid skull and images of the brain taken a short time apart do not have nonlinear geometric differences, provided sensor nonlinearities do not exist [22].

Projective and Linear Transformations

If two images are acquired over a reasonably flat area by sensors with no nonlinearities, then a linear transform can be used to calculate the relationship between the two images. A linear transform can be described by the following formulas:


X = ax + by + c,

Y = dx + ey + f.

A linear transform has 6 parameters, which can be solved for if at least 3 non-collinear corresponding points in the imagery are known. Linear transforms are regarded as weak transforms and have traditionally been used in the registration of both satellite and aerial images [22].
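The 6-parameter solve reduces to a small least-squares problem. A minimal sketch, assuming at least 3 non-collinear correspondences (function name is mine):

```python
import numpy as np

def fit_linear_transform(src, dst):
    """Solve X = a*x + b*y + c, Y = d*x + e*y + f by least squares.
    src, dst: (N, 2) corresponding points, N >= 3, non-collinear.
    Returns ((a, b, c), (d, e, f))."""
    A = np.column_stack([src, np.ones(len(src))])     # rows: [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)  # shape (3, 2)
    return tuple(coeffs[:, 0]), tuple(coeffs[:, 1])
```

With exactly 3 points the fit is exact; extra points are averaged in the least-squares sense, which is how noisy correspondences are usually handled in practice.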

The projective transformation requires that straight lines in the reference image remain straight in the sensed image. If parallel lines remain parallel, affine transformation may be used instead, and if angles between corresponding lines are preserved, the transformation of the Cartesian coordinate system may be used to register the images. Images that can be registered by the transformation of the Cartesian coordinate system can be registered by the affine transformation, and images that can be registered by the affine transformation can be registered by the projective transformation. If a straight line in the reference image maps to a curve in the sensed image, a nonlinear transformation is needed to register the images.

Thin-Plate Splines

Thin-plate splines (TPS), or surface splines [23,24,22], are perhaps the most widely used transformation functions in the registration of images with nonlinear geometric differences. They were first used by Goshtasby [25,22] in the registration of remote sensing images and then by Bookstein [26,22] in the registration of medical images. Given a set of 3-D points as defined by (8), the thin-plate spline interpolating the points is defined by a weighted sum of logarithmic basis functions plus a linear polynomial.


Thin-plate splines are not well suited to registration of images with sharp local geometric differences. TPS tend to achieve good accuracy at and near the control points, with larger errors away from the control points. This can be attributed to the fact that logarithmic basis functions, which are rotationally symmetric, are used to define the transformation. When the arrangement of the control points is nonuniform, large errors are obtained in areas where large gaps exist between control points.
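SciPy ships a thin-plate spline kernel in `RBFInterpolator`, which makes the interpolating behaviour described above easy to see: the spline passes exactly through the control points, and accuracy between them depends on how the points are arranged. A sketch with made-up control points:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical control points in the reference image, and where they
# fall in the sensed image (small nonlinear displacements).
ref = np.array([[0., 0.], [100., 0.], [0., 100.], [100., 100.], [50., 50.]])
sensed = ref + np.array([[2., 1.], [1., 2.], [2., 2.], [1., 1.], [3., 3.]])

# smoothing defaults to 0, so the TPS interpolates the points exactly
tps = RBFInterpolator(ref, sensed, kernel='thin_plate_spline')
mapped = tps(ref)  # reproduces `sensed` at the control points
```

Setting a non-zero `smoothing` turns this interpolating spline into an approximating one, which is the preference noted earlier when the correspondences are noisy.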

Radial Basis Functions

When control points are irregularly spaced, radial basis functions cannot model the geometric difference between the images well, and registration accuracy tends to depend on the location of the control points. Basis functions that can adapt to the location of the points are more appropriate for the registration of images with irregularly spaced control points, such as aerial images.

Approximation Methods

The transform methods discussed so far (linear transforms, splines, and radial basis functions) use an exact form of interpolation, in that they attempt to match pairs of points between images exactly. Approximation methods differ in that they attempt to approximate the positions of control points in the corresponding images, using a weighted combination of all the available control points. The approximation function can be given by

f(x, y) = Σ Wi(x, y) Pi(x, y)

where Pi is a local polynomial fitted in the neighbourhood of control point i and Wi is a weight function that decreases with distance from that point, with the weights normalised to sum to one everywhere.
The polynomials encode information about geometric differences between the images in small neighborhoods. Therefore, the method is suitable for the registration of images where a large number of control points is available and sharp geometric differences exist between the images. When the density of control points is rather uniform, local polynomials of degree two will suffice. However, for very irregularly spaced control points, a polynomial of degree two may create holes in the transformation function in areas where large gaps exist between the control points, and polynomials of higher degrees are needed to widen the local functions to cover the holes. Polynomials of high degrees, however, are known to produce fluctuations away from the interpolating points [27, 22].

Piecewise Methods

Piecewise methods function by triangulating the control points in the master image and then determining the correspondence of these points to the reference image. Once this is known, corresponding triangles can be generated for the reference image, and the piecewise method operates within the areas of the corresponding triangles of both images. There are a number of differing piecewise methods, including the piecewise linear transform, which uses a linear transformation to map each triangle calculated in the master image to its counterpart in the reference image. This results in a continuous transform which functions well when the geometric differences between the master and reference images are not large. If local deformations are large, however, the corresponding triangles will not match, potentially producing a bad registration. The initial triangulation can also have an effect on the outcome of the final registration: in general, elongated triangles and those with acute angles tend to negatively affect the registration. There are existing algorithms which seek to avoid such triangles, one of which is the Delaunay triangulation, which maximizes the minimum angle of the calculated triangles. Piecewise linear and cubic methods are easy to implement and can be used to register images with nonlinear geometric differences. Good accuracy can be achieved inside the convex hull of the control points; outside this area, however, the errors increase rapidly, as extrapolation has to be used in this section of the image.
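The core of the piecewise linear transform is that a point's barycentric coordinates within a master triangle, applied to the corresponding reference triangle's vertices, give its mapped position. A small numpy sketch with hypothetical triangle coordinates:

```python
import numpy as np

def barycentric(tri, p):
    """Barycentric coordinates of point p with respect to triangle tri (3x2)."""
    T = np.column_stack([tri[1] - tri[0], tri[2] - tri[0]])
    l1, l2 = np.linalg.solve(T, p - tri[0])
    return np.array([1 - l1 - l2, l1, l2])

def piecewise_linear_map(tri_master, tri_ref, p):
    """Map p from the master triangle to the reference triangle.
    The same barycentric weights applied to the reference vertices give
    the affine (piecewise linear) image of p."""
    lam = barycentric(tri_master, p)
    return lam @ tri_ref

tri_m = np.array([[0., 0.], [10., 0.], [0., 10.]])   # triangle in the master image
tri_r = np.array([[1., 1.], [12., 2.], [2., 13.]])   # its warped counterpart

# The triangle's own vertices map exactly to the reference vertices,
# and interior points (e.g. the centroid) map affinely between them
vertex_image = piecewise_linear_map(tri_m, tri_r, np.array([0., 0.]))
centroid_image = piecewise_linear_map(tri_m, tri_r, tri_m.mean(axis=0))
```

In a full implementation this mapping is applied per triangle of the Delaunay triangulation, which is what keeps deformations local.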

Piecewise Approximation Methods

As described above, the piecewise method triangulates points in the image pair, forming corresponding triangles in the master and reference images. Each of these individual triangles is then considered separately in the registration of the two images. This has a number of benefits: it can perform well using only a small subset of points, and it tends to confine any errors or deformations locally within the image. By operating in this way it is able to map complex, neighbouring areas of an image by treating them individually. A single global transform is unable to do this, as it must apply one transform across the whole image; it will therefore likely be unable to model all the individual geometric differences across the image, causing errors in registration.


Measuring the Band to Band Registration of the Tetracam MCA6 Camera – Sample Imagery


To measure the band to band registration, a sample image from the MCA6 camera was chosen. The sample aerial image was captured over Charles Sturt University from a Cessna C-172 aircraft. The ground resolution of the image is approximately 50 cm GSD. This image was chosen as it has a good cross section of buildings and vegetation; these clearly identifiable features should give a good visual representation of any band to band registration errors in the image. The 6 individual TIFFs used to create the final 6 band TIFF are displayed below in figures 3–8.


Figure 3 – Master Camera Band 01


Figure 4 – Image for band 2


Figure 5 – Image for band 3


Figure 6 – Image for band 4


Figure 7 – Image for band 5


Figure 8 – Image for band 6

RGB Image created from the above images in PW2


Figure 9 – 3 band RGB image constructed from bands 3,2,1 using PW2 software. Note the obvious BBR errors at the edges and corners of the image.

Figure 10 – Full resolution section of the above 3 band image taken from the top left corner of the image.
Figure 11 – Full resolution section of the above 3 band image taken from the bottom left of the image
Figure 12 – Full resolution section of the above 3 band image taken from the center top of the image

CIR Image created from the above images in PW2

Figure 13 – 3 band CIR image constructed from bands 5,2,1 using the PW2 software. Some BBR errors are evident in the edges of the frame however it has performed better than the above RGB example.
Figure 14 – Full resolution section of the 3 band CIR image taken from the bottom left of the image
Figure 15 – Full resolution section of the 3 band CIR image taken from the top right of the image

Figures 10, 11, 12, 14 and 15 clearly show visible BBR errors in both the RGB and CIR images created using the PW2 software. The BBR errors can clearly be seen as a halo effect on the edges of buildings where the bands are misaligned. BBR errors of 0.25 pixels or greater are visible to the eye [20], and for most commercial multispectral sensors sub pixel registration is desirable. It should also be noted that from a visual inspection the CIR image (bands 5,2,1) appears to have better BBR than the RGB image (bands 3,2,1).

Measuring the band to band registration accuracy of the MCA6


To measure and quantify the band to band registration (BBR) accuracy of the MCA6 camera, a simple tie point methodology was used. First a sample 6 band image was created in TIFF format using the PW2 software. The sample image was captured on a test flight over the university from a Cessna 172 aircraft at an image resolution of 0.5 m, and was selected because it has a number of buildings and definable points which allowed the selection of clearly identifiable tie points.

In order to get an even spacing of 30 control points across the image, a grid was generated and placed over the image, dividing it into 30 quadrants. One control point per quadrant was then selected, giving a total of 30 evenly spaced control points throughout the image. Each of the 30 control points was then identified in each of the 6 spectral bands. Using ENVI software, the image coordinates for each of the 30 control points were recorded across all 6 bands. Band 1 was selected as the master band, and the differences in each band were recorded in relation to this band. The BBR errors were then recorded in pixels.
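The per-point BBR error reported in the tables below is simply the Euclidean distance between the master band and target band image coordinates. For example, taking control point 1 from Table 1:

```python
import numpy as np

def bbr_error(master_xy, band_xy):
    """Total band-to-band registration error in pixels for one tie point."""
    dx = master_xy[0] - band_xy[0]
    dy = master_xy[1] - band_xy[1]
    return np.hypot(dx, dy)   # sqrt(dx**2 + dy**2)

# Control point 1 from Table 1: band 1 (master) at (89, 75),
# band 2 at (88, 76) and band 3 at (93, 76)
err_band2 = bbr_error((89, 75), (88, 76))   # dx = 1,  dy = -1
err_band3 = bbr_error((89, 75), (93, 76))   # dx = -4, dy = -1
```

This reproduces the 1.41 and 4.12 pixel values reported for control point 1 in Table 4.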

The tie point methodology relies on being able to accurately identify control points in all 6 image bands, so the selection of appropriate control points is extremely important. Each control point had to be clearly identifiable, to within a pixel, across all 6 bands. Where possible, control points were selected on clearly identifiable features, including the edges of buildings, pitches of roofs, line markings on roads and the edges of shadows.

Figure 16 – The image segmented into 30 roughly equal quadrants by dashed white lines. The actual control point location within each quadrant is indicated by the red dots.

Creation of Digital Error Surfaces

In order to visualize the BBR errors for each band of imagery, a digital surface was created from the total error values in each band. This was done using Global Mapper software. The observed points were imported into Global Mapper as a text file, and each surface was created using the image coordinates as XY coordinates, with the total error value in pixels substituted for the height value. Once the digital surfaces were created, contours were generated at an interval of 0.5 pixels.
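Global Mapper's gridding algorithm is not documented here, but a comparable error surface can be sketched with simple inverse-distance weighting in numpy (the control point locations and error values below are hypothetical); contours at 0.5 pixel intervals could then be traced from the resulting grid:

```python
import numpy as np

def idw_surface(cp_xy, cp_err, grid_x, grid_y, power=2.0):
    """Grid scattered total-error values by inverse-distance weighting.
    cp_xy: (n, 2) image coordinates; cp_err: (n,) total error in pixels."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    q = np.stack([gx.ravel(), gy.ravel()], axis=1)
    d = np.linalg.norm(q[:, None, :] - cp_xy[None, :, :], axis=2)
    surface = np.empty(len(q))
    for i, row in enumerate(d):
        if row.min() < 1e-9:                 # grid node exactly on a control point
            surface[i] = cp_err[row.argmin()]
        else:
            w = 1.0 / row ** power           # closer points get more weight
            surface[i] = (w @ cp_err) / w.sum()
    return surface.reshape(gy.shape)         # rows indexed by y, columns by x

# Four hypothetical control points with their total errors in pixels
cp_xy = np.array([[100., 100.], [900., 100.], [500., 500.], [100., 900.]])
cp_err = np.array([4.0, 1.0, 0.0, 3.0])
surf = idw_surface(cp_xy, cp_err, np.linspace(0, 1000, 11), np.linspace(0, 1000, 11))
```

The interpolated surface honours the control point values exactly and stays within the observed error range between them.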


Table 1  – Image coordinates for each tie point in all 6 bands of sample image

Band 1 Band 2 Band 3 Band 4 Band 5 Band 6
CP NO x y x y x y x y x y x y
1 89 75 88 76 93 76 90 76 91 76 87 76
2 329 113 329 113 332 114 331 114 331 113 330 113
3 504 52 504 52 507 53 506 54 507 54 506 55
4 684 121 684 121 686 122 685 123 685 122 685 124
5 878 192 878 192 880 193 877 193 878 193 877 193
6 1247 100 1251 97 1252 100 1248 101 1249 100 1250 98
7 81 299 80 300 82 300 82 300 82 299 80 297
8 414 262 414 263 415 263 415 263 415 262 414 263
9 603 301 603 300 604 301 603 301 604 300 604 301
10 796 385 796 386 797 386 795 386 797 385 797 386
11 928 379 928 379 930 381 927 380 928 380 929 379
12 1143 303 1142 303 1145 303 1141 304 1142 303 1143 302
13 156 519 159 520 161 518 161 518 161 518 159 519
14 318 514 318 515 320 514 320 514 320 515 320 516
15 497 626 497 626 497 626 497 625 497 626 497 626
16 823 525 823 525 823 526 823 525 824 524 824 524
17 964 525 965 525 966 526 963 525 965 525 965 525
18 1145 563 1146 563 1147 565 1144 564 1145 563 1146 563
19 31 718 32 719 33 716 33 716 32 716 30 719
20 370 828 372 828 372 827 372 826 372 827 372 829
21 527 795 528 794 527 794 527 793 527 794 527 795
22 708 777 708 777 708 778 708 776 709 777 709 778
23 982 762 982 763 982 764 980 762 982 762 983 763
24 1089 772 1089 773 1090 775 1087 772 1090 772 1091 772
25 143 872 144 872 144 869 145 870 144 870 143 873
26 264 905 264 905 265 903 265 902 265 903 264 907
27 454 890 455 890 455 889 455 887 455 888 456 890
28 811 886 811 885 811 887 811 885 812 886 813 887
29 951 922 951 922 951 925 950 922 951 923 952 923
30 1219 900 1220 901 1220 903 1218 900 1221 900 1223 901

Table 2 – Difference in image coordinate in X



Image Band
GCP 1 2 3 4 5 6
1 0.00 1.00 -4.00 -1.00 -2.00 2.00
2 0.00 0.00 -3.00 -2.00 -2.00 -1.00
3 0.00 0.00 -3.00 -2.00 -3.00 -2.00
4 0.00 0.00 -2.00 -1.00 -1.00 -1.00
5 0.00 0.00 -2.00 1.00 0.00 1.00
6 0.00 -4.00 -5.00 -1.00 -2.00 -3.00
7 0.00 1.00 -1.00 -1.00 -1.00 1.00
8 0.00 0.00 -1.00 -1.00 -1.00 0.00
9 0.00 0.00 -1.00 0.00 -1.00 -1.00
10 0.00 0.00 -1.00 1.00 -1.00 -1.00
11 0.00 0.00 -2.00 1.00 0.00 -1.00
12 0.00 1.00 -2.00 2.00 1.00 0.00
13 0.00 -3.00 -5.00 -5.00 -5.00 -3.00
14 0.00 0.00 -2.00 -2.00 -2.00 -2.00
15 0.00 0.00 0.00 0.00 0.00 0.00
16 0.00 0.00 0.00 0.00 -1.00 -1.00
17 0.00 -1.00 -2.00 1.00 -1.00 -1.00
18 0.00 -1.00 -2.00 1.00 0.00 -1.00
19 0.00 -1.00 -2.00 -2.00 -1.00 1.00
20 0.00 -2.00 -2.00 -2.00 -2.00 -2.00
21 0.00 -1.00 0.00 0.00 0.00 0.00
22 0.00 0.00 0.00 0.00 -1.00 -1.00
23 0.00 0.00 0.00 2.00 0.00 -1.00
24 0.00 0.00 -1.00 2.00 -1.00 -2.00
25 0.00 -1.00 -1.00 -2.00 -1.00 0.00
26 0.00 0.00 -1.00 -1.00 -1.00 0.00
27 0.00 -1.00 -1.00 -1.00 -1.00 -2.00
28 0.00 0.00 0.00 0.00 -1.00 -2.00
29 0.00 0.00 0.00 1.00 0.00 -1.00
30 0.00 -1.00 -1.00 1.00 -2.00 -4.00

Table 3 – Difference in image coordinate in Y


Image Band Number
GCP 1 2 3 4 5 6
1 0 -1 -1 -1 -1 -1
2 0 0 -1 -1 0 0
3 0 0 -1 -2 -2 -3
4 0 0 -1 -2 -1 -3
5 0 0 -1 -1 -1 -1
6 0 3 0 -1 0 2
7 0 -1 -1 -1 0 2
8 0 -1 -1 -1 0 -1
9 0 1 0 0 1 0
10 0 -1 -1 -1 0 -1
11 0 0 -2 -1 -1 0
12 0 0 0 -1 0 1
13 0 -1 1 1 1 0
14 0 -1 0 0 -1 -2
15 0 0 0 1 0 0
16 0 0 -1 0 1 1
17 0 0 -1 0 0 0
18 0 0 -2 -1 0 0
19 0 -1 2 2 2 -1
20 0 0 1 2 1 -1
21 0 1 1 2 1 0
22 0 0 -1 1 0 -1
23 0 -1 -2 0 0 -1
24 0 -1 -3 0 0 0
25 0 0 3 2 2 -1
26 0 0 2 3 2 -2
27 0 0 1 3 2 0
28 0 1 -1 1 0 -1
29 0 0 -3 0 -1 -1
30 0 -1 -3 0 0 -1

Table 4 – Total band alignment error in pixels for each image band


Image Band
CP 1 (M) 2 3 4 5 6
1 0.00 1.41 4.12 1.41 2.24 2.24
2 0.00 0.00 3.16 2.24 2.00 1.00
3 0.00 0.00 3.16 2.83 3.61 3.61
4 0.00 0.00 2.24 2.24 1.41 3.16
5 0.00 0.00 2.24 1.41 1.00 1.41
6 0.00 5.00 5.00 1.41 2.00 3.61
7 0.00 1.41 1.41 1.41 1.00 2.24
8 0.00 1.00 1.41 1.41 1.00 1.00
9 0.00 1.00 1.00 0.00 1.41 1.00
10 0.00 1.00 1.41 1.41 1.00 1.41
11 0.00 0.00 2.83 1.41 1.00 1.00
12 0.00 1.00 2.00 2.24 1.00 1.00
13 0.00 3.16 5.10 5.10 5.10 3.00
14 0.00 1.00 2.00 2.00 2.24 2.83
15 0.00 0.00 0.00 1.00 0.00 0.00
16 0.00 0.00 1.00 0.00 1.41 1.41
17 0.00 1.00 2.24 1.00 1.00 1.00
18 0.00 1.00 2.83 1.41 0.00 1.00
19 0.00 1.41 2.83 2.83 2.24 1.41
20 0.00 2.00 2.24 2.83 2.24 2.24
21 0.00 1.41 1.00 2.00 1.00 0.00
22 0.00 0.00 1.00 1.00 1.00 1.41
23 0.00 1.00 2.00 2.00 0.00 1.41
24 0.00 1.00 3.16 2.00 1.00 2.00
25 0.00 1.00 3.16 2.83 2.24 1.00
26 0.00 0.00 2.24 3.16 2.24 2.00
27 0.00 1.00 1.41 3.16 2.24 2.00
28 0.00 1.00 1.00 1.00 1.00 2.24
29 0.00 0.00 3.00 1.00 1.00 1.41
30 0.00 1.41 3.16 1.00 2.00 4.12
Total 0.00 29.23 69.36 54.76 46.60 53.17

Total Error in Pixels for each band


Band 1 2 3 4 5 6
Total 0.00 29.23 69.36 54.76 46.60 53.17
Min 0 0 0 0 0 0
Max 0 5 5.1 5.1 5.1 4.12
Mean 0.00 0.97 2.31 1.83 1.55 1.77


Table 5 – Min, Max and Mean total error for each band of imagery

Table 5 shows that band 3 had the highest total error of all 5 bands relative to the master band, while band 2 had the lowest total error of 29.23 pixels. Band 3 also had the highest average control point error of 2.31 pixels, with band 2 again having the lowest. Interestingly, four of the five bands (2, 3, 4 and 5) all had a maximum error of 5 pixels or above, and these four bands were separated by only 0.1 of a pixel in their maximum error values. Only band 6 registered a maximum error of less than 5 pixels, with a score of 4.12 pixels. Also of note, all 5 bands scored a minimum error value of zero, possibly indicating that all bands had good registration in the center of the image.
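The Table 5 summaries follow directly from the coordinate differences. Checking band 2, with the X and Y differences copied from Tables 2 and 3:

```python
import numpy as np

# Band 2 X and Y coordinate differences for the 30 control points
dx = np.array([1, 0, 0, 0, 0, -4, 1, 0, 0, 0, 0, 1, -3, 0, 0,
               0, -1, -1, -1, -2, -1, 0, 0, 0, -1, 0, -1, 0, 0, -1])
dy = np.array([-1, 0, 0, 0, 0, 3, -1, -1, 1, -1, 0, 0, -1, -1, 0,
               0, 0, 0, -1, 0, 1, 0, -1, -1, 0, 0, 0, 1, 0, -1])

total = np.hypot(dx, dy)        # per-point total error in pixels
band2_total = total.sum()
band2_mean = total.mean()
band2_max = total.max()
```

This reproduces the reported band 2 figures: a total of 29.23 pixels, a mean of 0.97 and a maximum of 5.0.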

Figure 17 – Total error in Pixels
Figure 18 – Average total error in Pixels

Total Error Surfaces

Figure 19 – Surface generated from the total error in pixels for band 2

The total error surface for band 2 shows a reasonably clear pattern. There is an area of good registration, with errors of 0–0.5 pixels, in the centre of the image; this area extends up the middle of the image to its top edge. The majority of the image has a BBR error of between 0.5–2.0 pixels. Areas of high BBR error can be seen in the top right hand corner of the image, with errors around the 4 pixel mark, and a small spike with a maximum BBR error of around 3 pixels can be seen at the bottom left of the image.

Figure 20 – Total error surface for band 3

The surface for band 3 shows a clear pattern of BBR error: almost the classic pattern of good BBR in the center of the image, deteriorating as you move out toward the edges. The blue area in the center of the image indicates a BBR of 0–0.5 pixels, which meets the sub pixel requirement of most commercial multispectral sensors. The edges of the image, however, have some rather large errors in the 4–5 pixel range.

Figure 21 – Surface generated from the total error in pixels for band 4

The surface error plot for band 4 contains two small areas in the center of the image where the BBR error is at an acceptable rate of less than 0.5 pixels. Unlike bands 2 and 3, this band has better BBR in the top right hand corner of the image, with error rates in the 1–2 pixel range. Like bands 2 and 3, however, it also has poor BBR in the bottom left hand corner of the image.

Figure 22 – Surface generated from the total error in pixels for band 5

The band 5 surface error plot takes on a very similar structure to that of band 4. It also has two small areas of good registration in the center and center right of the image, slightly larger than those in band 4, and an average error of 1–2 pixels in the top right hand corner of the image.

Figure 23 – Surface generated from the total error in pixels for band 6

The error surface for band 6 follows a similar pattern to that of band 3, with an area of good registration at the center of the image and increasing error as you move towards the edges. There is a spike in error in the bottom left corner of the image, which is also visible in the other bands, and some high levels of BBR error on the top and right hand edges of the image. The majority of the error for this surface lies between 0.8–2.5 pixels.

Total Error Contour Diagrams

Figures 24–28 are contours generated from the total error surfaces. Each contour represents 0.5 pixels in total location error.

Figure 24 – Total error in pixels expressed in contours for band 2
Figure 25 – Total error in pixels expressed in contours for band 3
Figure 26 – Total error in pixels expressed in contours for band 4
Figure 27 – Total error in pixels expressed in contours for band 5
Figure 28 – Total error in pixels expressed in contours for band 6


A major assumption at the beginning of this study was that the failure of the PW2 software to take lens distortion into account would be a major contributing factor to the level of BBR error in the final 6 band image. As described earlier, lens distortion is expected to be minimal in the centre of the image and to increase as you move radially away from the center. If this assumption is correct, we would expect to see a clear pattern in the BBR total error surface plots for each band: an area of good BBR in the center of the image, with error steadily increasing towards the edges of the frame. If we use figure 2 as a guide to the likely lens distortion of each of the cameras, then we would expect the level of error to increase gradually until a distance of around 600 pixels from the center of the image. From this point the lens distortion increases rapidly, in an almost exponential fashion, and we would expect the BBR surface error plot to follow a similar pattern. This assumes a perfect lens with a constant distortion that follows an increasing concentric circle pattern. In reality no lens is perfect, and each lens has small imperfections from the way it is ground and manufactured, so for real lenses it is unlikely that the distortion profile would ever look like perfect concentric circles; it would look more like topographic contours on a map.

When we look at the BBR total error surface plots for all 5 bands (figures 19–23), we can see a common trend across the images. The error surfaces for all of the bands have an area of good registration in the center of the image, and all have increasing BBR error towards the edges. The actual level of BBR error at the edges of individual bands varies, but they all follow a similar pattern. All of the surfaces also share a common feature: an area of high BBR error in the bottom left hand corner of the image. Initially a bad control point was suspected as the cause of this cluster of error across all bands; points in that area of the image were double checked and the surfaces re-created, but the error remained. The fact that this error propagates across all six bands rules out any contaminants on the lens surface, such as aircraft exhaust residue or oil. One possible explanation is that the sensor was not pointing at nadir at the time of exposure, due to aircraft pitch or roll. A visual examination of the image shows some lean in buildings and trees in this bottom left corner, while buildings in the top right of the image do not seem to have the same level of lean. This indicates that the camera was not pointed at nadir at the time of exposure. The slightly more oblique viewing angle in this part of the image may serve to exaggerate distortions, and could be responsible for the high error cluster in this part of the image.

One other factor which could cause BBR errors in the imagery is the fact that each of the cameras contains a CMOS sensor with a rolling shutter. Rolling shutters are generally not preferred for aerial imagery, as the forward motion of the aircraft can combine with the rolling shutter to produce artifacts in the imagery, normally seen as the stretching or compressing of pixels. We have encountered this problem many times before when performing photogrammetry and aerial triangulation on other clients' imagery captured with digital SLRs. These artifacts can also be caused by aircraft pitch, roll, or yaw during the exposure. It is possible that aircraft roll may have combined with the rolling shutter to produce some artifacts in the bottom left corner of the image.

The PW2 software uses a global alignment file created from one set of 6 images. It assumes that all the cameras will fire at the same instant in time and with the same time bias as the images used to create the global alignment file. In reality, however, there will always be small differences in the time each camera triggers an exposure. If the camera were only experiencing forward motion, this difference would be expressed as a shift in the X image coordinate; however, an aircraft is a dynamic environment with movement around all 3 axes, so this error can also translate into the Y image coordinate if the aircraft is yawing or rolling at the time of exposure. The sample image was captured at a forward ground speed of 90 knots, or about 46 meters per second. Even a small difference of 1/100 of a second between cameras firing would see the aircraft travel 0.46 meters, which translates to an image movement of almost 1 pixel at 50 cm resolution.
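Checking the numbers, using the 90 knot ground speed and 50 cm GSD from the text, a 1/100 second triggering offset works out to just under a pixel of image movement:

```python
# Image smear caused by a camera triggering offset
ground_speed = 90 * 0.5144      # 90 knots in metres per second (~46.3 m/s)
gsd = 0.5                       # ground sample distance in metres
offset = 0.01                   # triggering difference in seconds (1/100 s)

distance = ground_speed * offset        # metres travelled during the offset
smear_pixels = distance / gsd           # equivalent image movement in pixels
```

The same two lines can be re-run with any suspected offset, so the sensitivity of the BBR to camera synchronisation is easy to explore.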

Also of interest, figure 1 shows that there is up to a 5 pixel difference in the amount of radial distortion between cameras. The maximum error we encountered in this study is also in that range of 5 pixels, and it occurred at the edges of the frame, where the 5 pixel distortion was recorded in the lens radial distortion plots. Thus the difference in lens distortion between bands may be a critical factor in the BBR.

From the literature review on transforms, it appears that the PW2 software uses a simple similarity transform to register the MCA6 images together. Similarity transforms can be used to register some aerial and satellite images where the scene is rather flat and the platform is at a distance looking down at a normal angle to the scene [16]. It is also noted that similarity transforms do not cope well with sensor nonlinearities. These two factors are likely where the BBR errors are being generated in the PW2 software. Firstly, as noted above, the sample image does not have a uniform or normal viewing angle across the entire scene. Secondly, the individual bands of the MCA6 imagery contain a number of nonlinear geometric differences between bands, caused by the sensor not being at nadir at the time of exposure, by terrain, and by lens distortion. To a lesser extent, artifacts caused by the rolling shutter and the forward movement of the camera may also be introducing nonlinear geometric differences between bands and degrading the BBR. From the BBR total error plots it can be seen that the similarity transform does a good job in the center of the image, where there is a uniform viewing angle and minimal nonlinear geometric differences. As you move out towards the edges of the image, however, where the viewing angle becomes non-uniform and nonlinear geometric errors increase, the similarity transform starts to perform poorly.


In conclusion, the total error surface plots show that the transform used by the PW2 software performs well in the center of the image, and that the BBR accuracy decreases as you move radially out towards the edges of the image. The actual level of BBR error at the edges of each band varied slightly, but all bands followed this trend. This pattern of BBR error exhibited by all bands suggests that the failure of the PW2 software to take lens distortion into consideration is having a negative effect on the BBR of the final image. It is also noted that similarity transforms like the one used by the PW2 software do not perform well when nonlinear geometric differences are present; variations in distortion between lenses are a likely source of these nonlinear geometric differences. To a lesser extent, artifacts caused by the CMOS sensor's rolling shutter may also be having a negative effect on the BBR. These errors would be harder to detect, and further study would be required.

Any BBR above a value of 0.25 pixels is visible in the image to the human eye [16], and most commercial multispectral sensors and satellites have a requirement for sub pixel BBR. Small parts of all bands of the MCA6 camera meet this sub pixel requirement; overall, however, the sensor would fail to meet the sub pixel BBR accuracy requirement. This would limit the sensor's applications to projects requiring less radiometric accuracy and exclude it from high accuracy remote sensing projects.

Other operators [1, 21] have had success improving the BBR accuracy of the Tetracam MCA6 camera using proprietary algorithms and other software packages to perform the band stacking, though both of these methods required considerable operator input. Due to the small footprint of the sensor, often hundreds or thousands of frames are needed to cover a project area. Any new band stacking methodology therefore needs to be automated, with no operator input, and computationally simple, so that numerous images can be processed quickly. Further research is needed into other image transforms and software packages which may be able to improve the BBR of the Tetracam MCA6 camera and increase its potential as a cost effective remote sensing tool.


  1. Kelcey, J.; Lucieer, A. Sensor correction of a 6 band multispectral imaging sensor for UAV remote sensing.
  2. Wang, A.; Qiu, T.; Shao, L. A simple method of radial distortion correction with centre of distortion estimation. J. Math. Imag. Vis. 2009, 35, 165–172.
  3. Prescott, B. Line-based correction of radial lens distortion. Graph. Model. Image Process. 1997, 59, 39–47.
  4. Hugemann, W. Correcting Lens Distortions in Digital Photographs; Ingenieurbüro Morawski + Hugemann: Leverkusen, Germany, 2010.
  5. Park, J.; Byun, S.C.; Lee, B.U. Lens distortion correction using ideal image coordinates. IEEE Trans. Consum. Electron. 2009, 55, 987–991.
  6. Jedlička, J.; Potůčková, M. Correction of Radial Distortion in Digital Images; Charles University in Prague: Prague, Czech Republic, 2006.
  7. de Villiers, J.P.; Leuschner, F.W.; Geldenhuys, R. Modeling of radial asymmetry in lens distortion facilitated by modern optimization techniques. Proc. SPIE 2010, 7539, 75390J:1–75390J:8.
  8. Wang, J.; Shi, F.; Zhang, J.; Liu, Y. A new calibration model and method of camera lens distortion. In Proceedings of the 2006 IEEE/RSJ Int. Conf. Intell. Robot. Syst., Beijing, China, 9–15 October 2006; pp. 5713–5718.
  9. Wang, J.; Shi, F.; Zhang, J.; Liu, Y. A new calibration model and method of camera lens distortion. Pattern Recognit. 2008, 41(2), 607–615.
  10. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22(11), 1330–1334.
  11. Clarke, T.A.; Fryer, J.G. The development of camera calibration methods and models. Photogramm. Rec. 1998, 16(91), 51–66.
  12. Tsai, R. An efficient and accurate camera calibration technique for 3-D machine vision. In Proc. IEEE CVPR 1986; pp. 364–374.
  13. Tsai, R.Y. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3(4), 323–344.
  14. Devernay, F.; Faugeras, O. Straight lines have to be straight. Mach. Vis. Appl. 2001, 13, 14–24.
  15. Brauer-Burchardt, C.; Voss, K. Automatic lens distortion calibration using single views. Mustererkennung 2000, 1, 187–194.
  16. Tardif, J.-P.; Sturm, P.; Roy, S. Self-calibration of a general radially symmetric distortion model. In Proc. ECCV 2006.
  17. Stein, G.P. Lens distortion calibration using point correspondences. In Proc. CVPR 1997; pp. 602–608.
  18. Fitzgibbon, A.W. Simultaneous linear estimation of multiple view geometry and lens distortion. In Proc. CVPR 2001; pp. 125–132.
  19. Hartley, R.I.; Kang, S.B. Parameter-free radial distortion correction with centre of distortion estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29(8), 1309–1321.
  20. Goforth, M.A. Sub-pixel registration assessment of multispectral imagery. Goforth Scientific.
  21. Laliberte, A.S. (complete this bit) Multispectral remote sensing from unmanned aircraft: image processing workflows and applications for rangeland environments.
  22. Harder, R.L.; Desmarais, R.N. Interpolation using surface splines. J. Aircraft 1972, 9(2), 189–191.
  23. Meinguet, J. An intrinsic approach to multivariate spline interpolation at arbitrary points. In Polynomial and Spline Approximation; Sahney, B.N., Ed.; D. Reidel Publishing Company, 1979; pp. 163–190.
  24. Goshtasby, A. Registration of images with geometric distortions. IEEE Trans. Geoscience and Remote Sensing 1988, 26(1), 60–64.
  25. Bookstein, F.L. Principal warps: thin-plate splines and the decomposition of deformations. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11(6), 567–585.
  26. Goshtasby, A. Image registration by local approximation methods. Image and Vision Computing 1988, 6(4), 255–261.




Kinematic Ground Control?

Kinematic Ground Control…….No, I haven’t had too many beers as I type this. If you think about it, it’s actually a clever idea, and it's being used by a mob called MapKite. The idea is fairly simple, in that the ground control actually moves with the UAV so that it's visible in most frames, minimising the need for traditional old fashioned static ground control targets. I will just give you a second to think about that before going on………….. got it? Good. Traditional control can be time consuming and costly to lay, especially on long corridor projects, which is where MapKite are implementing this technology. In a nutshell, they have a ground control target strapped to the roof of a van (see pic above). The position of this van is known thanks to real time RTK GPS, hence the position of the target is also known. The drone flies along above the van capturing imagery while the van is capturing terrestrial LIDAR. Using machine vision, the drone is also able to follow the target to stay with the van. Post flight the two data sets can then be merged. Pretty cool…….

For a better explanation on how it all works check out the video below.


Camera Porn

Some interesting camera propaganda came across my inbox the other day: a data sheet on the Ximea CB500CG-CM. While it doesn’t have the most imaginative name, it does look to be a potentially awesome little camera for remote sensing applications. It’s got 50 megapixels of 12 bit CMOS goodness and can crank out an impressive 26 frames per second. It crams its 50 million 4.5 micron pixels into a full size 35 mm sensor and accepts EF-mount Canon lenses. If the EF mount sounds familiar, that’s because it is the standard mount for the EOS family. With over 100 million lenses produced so far, that’s a lot of glass to pick from. There are a few variations of the EF family of lenses, though, and it’s not clear which ones this camera body will accept.
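For a bit of back-of-envelope fun with the figures above, here’s what the quoted specs imply for raw data rate, plus the ground sample distance you’d get from 4.5 micron pixels. The 35 mm lens and 400 m AGL are my example values, not anything from the data sheet.

```python
# Figures quoted from the data sheet discussion above.
PIXELS = 50_000_000   # 50 MP
BIT_DEPTH = 12        # bits per pixel
FPS = 26              # frames per second
PIXEL_PITCH_M = 4.5e-6  # 4.5 micron pixels

# Raw sensor data rate: a real storage/interface consideration at 26 fps.
rate_gbit_s = PIXELS * BIT_DEPTH * FPS / 1e9
print(f"raw data rate: {rate_gbit_s:.1f} Gbit/s")

# Ground sample distance: GSD = pixel pitch * altitude / focal length.
# Example only: 35 mm lens at 400 m above ground level.
gsd_cm = PIXEL_PITCH_M * 400 / 0.035 * 100
print(f"GSD at 400 m with a 35 mm lens: {gsd_cm:.1f} cm")
```

That works out to roughly 15.6 Gbit/s of raw data, which is a big part of why these cameras need a serious embedded computer behind them.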

Canon has had a 50 mpxl DSLR on the market for a while now, so what’s the big deal about this camera? Well, the big deal is that a 50 mpxl sensor has been packed into a small form factor ‘machine vision’ or ‘industrial’ camera, whose small size, low weight and lack of moving parts make it perfect for stuffing into small to medium sized UAV airframes.

One potential advantage of this system is the weight of the camera: a paltry 175 grams for the body alone. By the time you add 250–600 g of lens you are looking at a sensor weight of 425–775 g. Unlike a DSLR, the CB500CG-CM needs some type of embedded computer to run it, which could add another 150–300 g depending on the option chosen, bringing the total weight to around the 1 kg mark. That’s a little heavy for smaller UAVs, but in the mix for medium sized airframes, and its small form factor also helps where space is tight.

It’s unclear from the documentation, but I believe this camera may actually have a global shutter, which is desirable for airborne remote sensing. I have written to Ximea for clarification and will update you when I know more. It also has the ability to synchronise multiple cameras, which opens up options for all sorts of multi camera rigs.

So if this camera is so good, will we see them crammed into UAVs all over the world? Not likely, and the one big hurdle is going to be price. Unlike DSLRs, where millions of units are produced, machine vision cameras are made in small numbers, which drives the price up. I have contacted Ximea for some pricing and will update you when I know more.


Portable Mapping Pods Take to the Skies

For around 12 months now I have been working on a portable mapping pod system for Cessna 172 aircraft over at Aerial Acquisitions. It’s been a long road designing and developing the system, and we are now at the fun part: test flying our first systems. Here is a quick video of one of our test flights from a few weeks ago. The video is a bit rough but gives you the general idea. If you want more info on the pod, it can be found at my day job over at Aerial Acquisitions.


Specim FX Series Cameras

Check out the new FX series of cameras from Specim, which could have real applications in airborne remote sensing (especially for the portable camera pods I have been working on). From initial impressions it looks very promising. It has a good spectral range of 400–1000 nm, which covers most environmental and precision agriculture applications. Within this range the user can choose from up to 220 narrow spectral bands, more than enough to keep any remote sensing nerd happy for days. In a small camera like this, signal to noise may be an issue, so it would be interesting to get some hard numbers on that. One limitation for UAV use is that the camera needs some type of embedded computer to run it; this may not be a problem in bigger airframes but could be difficult on smaller quadcopters. Speaking of weight, the sensor unit is a tidy 1.4 kg and reasonably compact at 150 x 85 x 71 mm.
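Playing with those numbers: 220 bands across 400–1000 nm implies roughly 2.7 nm per band. A quick sketch, assuming evenly spaced bands (which Specim’s data sheet may or may not bear out):

```python
# Rough band spacing implied by the quoted figures: 220 bands, 400-1000 nm.
# Even spacing is my assumption here, not a Specim specification.
RANGE_NM = (400, 1000)
BANDS = 220

spacing = (RANGE_NM[1] - RANGE_NM[0]) / BANDS
centres = [RANGE_NM[0] + (i + 0.5) * spacing for i in range(BANDS)]
print(f"~{spacing:.1f} nm per band")
print(f"first/last band centres: {centres[0]:.1f}, {centres[-1]:.1f} nm")
```

At under 3 nm per band that’s comfortably into hyperspectral territory, which is what makes a unit this small interesting.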

Check out the propaganda video below, and I hope to bring you some more information soon.