John Rama had a question about the image processing exercise that I thought might be valuable for sharing with the group.
"The attached word file has the MultiSpec Image and what I think is the Statistical Analysis that goes with this image. I would like to have a discussion about what it all means. It may be so poorly done that it is not worth discussing. I would like to see something more than just pretty colors but need help understanding what information can be pulled from the analysis - even if it is information that says this image is not useable. Thoughts.
John R."
Hi John, I agree, and thank you for your effort here. It is definitely worth discussing.
The information on the left refers to the statistical probability that each pixel in the category was classified correctly. In a supervised classification (where you "train" the classifier by selecting representative samples first), you determine the classification scheme yourself. In an unsupervised classification, the program automatically sorts the image into a predetermined number of categories based on a sampling method, with no training samples.
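If it helps to see that difference concretely, here is a minimal sketch of the two approaches in Python. This is outside MultiSpec (it uses scikit-learn rather than MultiSpec's own classifier), and the band values and training labels are made up purely for illustration:

```python
# Minimal sketch (not MultiSpec): supervised vs. unsupervised classification
# of multispectral pixels, using scikit-learn. All data here is invented.
import numpy as np
from sklearn.cluster import KMeans                                     # unsupervised
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis   # supervised (illustrative choice)

rng = np.random.default_rng(0)

# Pretend image: 100 x 100 pixels, 4 bands (e.g., R, G, B, NIR), values 0-255
image = rng.integers(0, 256, size=(100, 100, 4)).astype(float)
pixels = image.reshape(-1, 4)          # one row per pixel

# Unsupervised: the program splits the pixels into a preset number of
# categories with no training samples.
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(pixels)
unsupervised_map = kmeans.labels_.reshape(100, 100)

# Supervised: you "train" with representative samples first.
# These training pixels and labels are placeholders, not real selections.
train_pixels = pixels[:500]
train_labels = rng.integers(0, 6, size=500)     # 6 classes: grass, road, car, ...
clf = LinearDiscriminantAnalysis().fit(train_pixels, train_labels)
supervised_map = clf.predict(pixels).reshape(100, 100)
```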
It looks like you ran your classification process properly. The issue I see here is that the visible and NIR images were not registered (spatially aligned) to each other. If you look carefully, you will see that the house appears twice in the image, with a considerable offset. Once you see this, you will probably recognize that all of the features maintain this offset. This means that any given pixel in the composite image will contain information from two different features (e.g., house/field, road/trees, etc.). Any statistical analysis of the pixels will therefore not be a valid representation of the site on the ground.
Also, don't forget that once the images are properly registered, the parts that extend beyond the area of overlap must be trimmed off in your image processing software. Pixels that contain information from only one image will distort the statistics calculated for the overall image.
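If you are curious what that registration-and-trimming step amounts to numerically, here is a rough sketch outside of GIMP, using scikit-image and SciPy. The file names are placeholders, and it assumes the misalignment is a simple x/y offset; it is not the GIMP workflow from the tutorial, just an illustration of the idea:

```python
# Rough sketch: estimate the offset between two bands, align them, and trim
# both to the shared overlap. File names and the pure-translation assumption
# are illustrative only.
import numpy as np
from skimage import io
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

vis = io.imread("visible.tif", as_gray=True).astype(float)   # placeholder path
nir = io.imread("nir.tif", as_gray=True).astype(float)       # placeholder path

# Estimate the (row, col) offset needed to register the NIR image to the visible image.
offset, error, _ = phase_cross_correlation(vis, nir)
print("Estimated offset (rows, cols):", offset)

# Shift the NIR image so its features line up with the visible image.
nir_aligned = nd_shift(nir, shift=offset)

# Trim both images to the region they truly share, so that no pixel carries
# information from only one image.
dr, dc = int(round(offset[0])), int(round(offset[1]))
r0, r1 = max(dr, 0), vis.shape[0] + min(dr, 0)
c0, c1 = max(dc, 0), vis.shape[1] + min(dc, 0)
vis_overlap = vis[r0:r1, c0:c1]
nir_overlap = nir_aligned[r0:r1, c0:c1]
```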
As for the statistics:
Classification of Training Fields provides information about the likelihood that each pixel in your training selection was actually representative of the overall selection (a patch of dirt in a selection of grass will read as incorrect because it is not representative of the rest of the selection).
If you look at column 1, "Grass", you will see that 19776 pixels in the training selection are identified as Class Number 1, 0 as Class Number 2, 65 as Class Number 3, 8 as Class Number 4, 8 as Class Number 5, and 74 as Class Number 6. This means that your training selection for "Grass" also contained what the system identified as 65 pixels of "Road", 8 pixels of "Car", 8 pixels of "Trees", and 74 pixels of "Field".
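Put another way, that column is just a tally, and the percentages follow directly from it. Here is the arithmetic with the "Grass" column numbers typed in by hand (the name of Class 2 is not listed above, so it is left as a placeholder):

```python
# The "Grass" training column as a simple tally, using the numbers quoted above.
import numpy as np

class_names = ["Grass", "Class 2", "Road", "Car", "Trees", "Field"]  # Class 2's name isn't given above
grass_column = np.array([19776, 0, 65, 8, 8, 74])                    # pixels assigned to classes 1-6

total = grass_column.sum()
print(f"Training pixels in 'Grass' selection: {total}")
print(f"Classified as Grass: {100.0 * grass_column[0] / total:.2f}%")
for name, count in zip(class_names, grass_column):
    print(f"  {name}: {count} pixels ({100.0 * count / total:.2f}%)")
```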
Training Class Performance tells you how the pixels in the image ended up being classified based on the training selections. Note that the Total and Reliability Accuracy rows at the bottom of the table are not properly aligned with the columns above. They need to shift left.
Class Distribution for Selected Area displays the actual results of the entire image classification. You can see here how many pixels of each class were identified, and the percentage of the total image that they comprise.
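For what it's worth, that table is nothing more than a pixel count per class over the whole classified image. A small sketch with a made-up class map (not MultiSpec output) produces the same kind of summary:

```python
# Sketch: class distribution for a classified image (the label array is invented).
import numpy as np

rng = np.random.default_rng(1)
classified = rng.integers(1, 7, size=(100, 100))   # pretend class map, classes 1-6

classes, counts = np.unique(classified, return_counts=True)
total = classified.size
for cls, count in zip(classes, counts):
    print(f"Class {cls}: {count} pixels ({100.0 * count / total:.2f}% of image)")
```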
Again, because your images were not aligned, the pixels are mixed, so the classes do not represent the real classes on the ground. If you go back and align and crop the images, you should get much more accurate results.
Classification of images is very important because it allows us to quantify the data and make it available for analysis, and it is only one of many things we can do with the images. We will post another tutorial next week on producing a vegetation index, which can tell you how much photosynthetic vegetation is in the image, as well as the condition of that vegetation.
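Without getting ahead of that tutorial, the most common vegetation index (NDVI) is just a band ratio, (NIR - Red) / (NIR + Red). A toy sketch follows; the values are placeholders, the tutorial may use a different index or tool, and of course the bands must be registered and trimmed first:

```python
# Toy NDVI sketch: (NIR - Red) / (NIR + Red), computed per pixel.
# 'red' and 'nir' are placeholder arrays; real bands must be registered/trimmed first.
import numpy as np

red = np.array([[0.10, 0.30], [0.25, 0.05]])   # toy reflectance values
nir = np.array([[0.40, 0.35], [0.60, 0.50]])

ndvi = (nir - red) / (nir + red + 1e-9)        # small epsilon avoids divide-by-zero
print(ndvi)   # values near +1 suggest dense, healthy vegetation; near 0 or below suggest soil, pavement, or water
```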
I hope this is helpful, and not too confusing. Thoughts or questions?
I had a question about when to crop the images. At what point in the process should this happen?
Cropping should take place after the images are aligned in the graphics program (GIMP in this case). In the Processing of Kite Imagery instruction, this takes place at the end of Step 3.6.