Tuesday, December 14, 2010

Summary of the December 13, 2010 ICCARS PLC Teleconference

Equipment Used: Audio—Phone Conference provided by Wayne RESA; Video—Adobe Connect, provided by Wayne RESA

You can listen to and view the videoconference by visiting:
http://remc.adobeconnect.com/p47275523

4:15 – 4:30 – Login

4:30 – 6:00 – Teleconference

Attendees: John Bayerl, Lynn Bradley, Wanda Bryant, Russell Columbus, Erica Conley-Shannon, Greg Dombro, Jennifer Gorsline, Tom Green, Dan Neil, Kathleen O’Connor, Deena Parks, John Rama, Darcie Ruby, Bruce Szczechowski, and Yichun Xie.

Absent: Laura Amatulli and Caroline Chuby

Hosts: David Bydlowski and Andy Henry



Agenda and Notes:

1. Audio Test

2. Update on our participation in the Michigan Climate Coalition and Kathleen O’Connor’s participation in the Condition 1 project.

3. Update on the iPad software update to 4.2.1 and the use of RSS Readers.

4. Review of Units that were turned in on December 10.

a. Theme, Time Span, Alignment to Standards, Identification of Key Knowledge and Skills

b. Units are posted at: http://geodata.acad.emich.edu/iccars/ in Resources and then Lesson Plans

5. Next Assignment – Write the Driving Questions for each unit. Due January 7.

6. Group Sharing

a. Kathleen spoke about her participation in Condition 1.

b. No major issues with the iPad update or RSS Readers.

c. Many groups commented on their units. A few questions were asked about Driving Questions, in particular how many driving questions are appropriate within a unit.

7. Information about Climate Change, based on the Yale report Americans’ Knowledge of Climate Change and the presentation given at the 2010 UN Climate Conference.

a. It was recommended that participants download the report and possibly use some of its questions with their students. It also provides a way to gain a better understanding of the misconceptions associated with climate change.

b. http://environment.yale.edu/climate/news/knowledge-of-climate-change

c. Yale presentation at the UN Climate Conference: http://environment.yale.edu/climate

d. Global Warming’s “Six Americas” – what they think, why they think it, and the questions they would ask.

8. Group Sharing—participants said the information was enlightening and informative.

9. Information about Remote Sensing

a. Remote Sensing Process – Statement of the Problem; Identification of In Situ and Remote Sensing Data Requirements; Remote Sensing Data Collection; Remote Sensing Data Analysis; Information Presentation

b. AEROKATS TwinCam Image Processing/Classification Steps – Acquire Imagery from Sensor; Preprocess Imagery; Process Imagery (Supervised Classification)

c. John Rama spoke about the problems he has encountered in the process, to help others see what issues can arise.

d. MultiSpec Tutorials

e. Earth Observation Systems – NASA Global Climate Change Website and NASA/JPL Eyes on the Earth 3D

f. http://climate.nasa.gov/index.cfm

g. http://climate.nasa.gov/Eyes/eyes.html

h. Categories of EOS Missions—14 Satellites (8 atmosphere; 2 oceans; 4 land)

i. EOS Data Sources – NASA/GSFC Global Change Master Directory; CEOS Climate Diagnostics; USGS Earth Explorer; USGS GloVis and MODIS Web

j. http://gcmd.nasa.gov

k. http://idn.ceos.org

l. http://edcsns17.cr.usgs.gov/EarthExplorer

m. http://glovis.usgs.gov

n. http://modis.gsfc.nasa.gov

10. Group Sharing—the majority of the discussion centered on the difficulty of the process and on the fact that many participants had a general understanding but not a working understanding. It was also noted that, in terms of understanding, there is a disconnect between remote sensing and climate change. Some participants suggested meeting over the Holiday Break to work on image processing and related issues. Wednesday, December 22, from 9:00 – 3:00 at Wayne RESA was selected.

11. The next PLC Teleconference will take place at 4:30 p.m. (EST) on January 10, 2011.

Editor’s Note: The PLC teleconference went pretty smoothly. The major problem was that it ran 30 minutes too long; as hosts, we have to do a better job of staying within the time constraints. Special thanks go out to all of the participants, who not only stayed on but actively participated. It is also very impressive that the group wanted to meet on its own time to improve its skills and understanding.

Monday, December 6, 2010

Image Processing Question

John Rama had a question about the image processing exercise that I thought would be valuable to share with the group.

"The attached word file has the MultiSpec Image and what I think is the Statistical Analysis that goes with this image. I would like to have a discussion about what it all means. It may be so poorly done that it is not worth discussing. I would like to see something more than just pretty colors but need help understanding what information can be pulled from the analysis - even if it is information that says this image is not useable. Thoughts.
John R."




Hi John, I agree, and thank you for your effort here. It is definitely worth discussing.

The information on the left refers to the statistical probability that each pixel in the category is classified correctly. In a supervised classification (where you "train" the classifier by selecting representative samples first), you determine the classification scheme. In an unsupervised classification, the program automatically classifies the image into a predetermined number of categories based on a sampling method, with no training samples.
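To make the distinction concrete, here is a rough Python sketch of the two approaches, using scikit-learn and synthetic stand-in data rather than MultiSpec; the Gaussian maximum-likelihood-style classifier and the six-class setup are illustrative assumptions, not John's actual configuration.

```python
# Supervised vs. unsupervised classification of multispectral pixels.
# Synthetic data stands in for a real image throughout.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)
image = rng.random((100, 100, 4))              # stand-in 4-band image
pixels = image.reshape(-1, image.shape[-1])    # one row per pixel

# Supervised: "train" the classifier with sample pixels you labeled yourself
train_pixels = pixels[:600]                    # hypothetical training fields
train_labels = rng.integers(0, 6, 600)         # hypothetical labels, 6 classes
clf = QuadraticDiscriminantAnalysis()          # Gaussian maximum-likelihood style
clf.fit(train_pixels, train_labels)
supervised_map = clf.predict(pixels).reshape(image.shape[:2])

# Unsupervised: the program groups pixels into a predetermined number of
# categories on its own, with no training samples
unsupervised_map = KMeans(n_clusters=6, n_init=10).fit_predict(pixels)
unsupervised_map = unsupervised_map.reshape(image.shape[:2])
```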

It looks like you ran your classification process properly. The issue I see here is that the visible and NIR images were not registered (spatially aligned) to each other. If you look carefully, you will see that the house appears twice in the image, with a considerable offset. Once you see this, you will probably recognize that all of the features maintain this offset. This means that any given pixel in the composite image will contain information from two different features (e.g., house/field, road/trees, etc.). Any statistical analysis of the pixels will therefore not be a valid representation of the site on the ground.

Also, don't forget that once the images are properly registered, the parts that extend beyond the overlap must be trimmed off in your image processing software. Pixels that contain information from only one image will distort the calculations for the overall image.
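For anyone who wants to experiment with this outside of MultiSpec (which has its own registration tools), here is a minimal Python sketch of the align-and-trim idea, assuming SciPy and scikit-image are available; the bands and the offset are synthetic.

```python
# Estimate the offset between two bands, align them, and trim to the overlap.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)
vis = rng.random((200, 200))          # stand-in visible band
nir = nd_shift(vis, (12, -7))         # stand-in NIR band with a known offset

# Estimate the (row, col) shift that registers the NIR band to the visible band
offset, error, _ = phase_cross_correlation(vis, nir)
nir_aligned = nd_shift(nir, offset)

# Trim both bands to the overlapping region so that no retained pixel carries
# information from only one image
dy, dx = (int(round(v)) for v in offset)
rows, cols = vis.shape
r0, r1 = max(dy, 0), rows + min(dy, 0)
c0, c1 = max(dx, 0), cols + min(dx, 0)
vis_crop = vis[r0:r1, c0:c1]
nir_crop = nir_aligned[r0:r1, c0:c1]
```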

As for the statistics:

Classification of Training Fields provides information about the likelihood that a pixel in your selection was actually representative of the overall selection (a patch of dirt in a selection of grass will read as incorrect because it is not representative of the rest of the selection).

If you look at column 1, "Grass", you will see that 19,776 pixels in the training selection are identified as Class Number 1, 0 as Class Number 2, 65 as Class Number 3, 8 as Class Number 4, 8 as Class Number 5, and 74 as Class Number 6. This means that your training selection for "Grass" also contained what the system identified as 65 pixels of "Road", 8 pixels of "Car", 8 pixels of "Trees", and 74 pixels of "Field".
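In other words, this table is essentially what statisticians call a confusion matrix: each row is a training class, and each column is the class the pixels were actually assigned to. A small Python sketch with made-up labels (not John's data) reproduces the structure:

```python
# Build a confusion matrix from hypothetical per-pixel labels.
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
true = np.repeat([1, 2, 3], 300)         # training field each pixel came from
pred = true.copy()
stray = rng.random(true.size) < 0.05     # ~5% of pixels read as another class
pred[stray] = rng.integers(1, 4, stray.sum())

cm = confusion_matrix(true, pred)
print(cm)  # row i shows where the pixels of training class i ended up;
           # off-diagonal entries are the stray pixels, like the 65 "Road"
           # pixels inside the "Grass" selection above
```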

Training Class Performance tells you how the pixels in the image ended up being classified based on the training selections. Note that the Total and Reliability Accuracy rows at the bottom of the table are not properly aligned with the columns above. They need to shift left.
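If I am reading MultiSpec's table correctly, "Reliability Accuracy" corresponds to what is often called user's accuracy: of the pixels assigned to a class, the fraction that truly belong to it. A quick sketch with a hypothetical 3-class confusion matrix:

```python
# Overall and per-class (reliability/user's) accuracy from a confusion matrix.
import numpy as np

cm = np.array([[290,   6,   4],     # rows: training class
               [  8, 285,   7],     # columns: assigned class
               [  3,   5, 292]])    # values are hypothetical pixel counts

overall_accuracy = np.trace(cm) / cm.sum()   # fraction of all pixels correct
reliability = np.diag(cm) / cm.sum(axis=0)   # per assigned class, fraction that
                                             # truly belonged to that class
print(overall_accuracy, reliability)
```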

Class Distribution for Selected Area displays the actual results of the entire image classification. You can see here how many pixels of each class were identified, and the percentage of the total image that they comprise.
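Once you have the classified map as an array of class IDs, that distribution is easy to compute yourself; here is a sketch with stand-in data:

```python
# Count pixels per class and report each class's share of the image.
import numpy as np

rng = np.random.default_rng(2)
classified = rng.integers(1, 7, size=(200, 200))   # stand-in class-ID map

ids, counts = np.unique(classified, return_counts=True)
for class_id, count in zip(ids, counts):
    print(f"class {class_id}: {count} pixels "
          f"({100 * count / classified.size:.1f}%)")
```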

Again, because your images were not aligned, the pixels are mixed, so the classes do not represent the real classes on the ground. If you go back and align and crop the images, you should get much more accurate results.

Classification of images is very important because it allows us to quantify the data and make it available for analysis. It is only one of many things we can do with the images. We will post another tutorial next week on producing a vegetation index, which can tell you how much photosynthetic vegetation is in the image, as well as the status of photosynthetic plants.
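Ahead of that tutorial, here is a generic sketch of the most common vegetation index, NDVI, with stand-in bands; with real data, the red and NIR bands must be registered first, as discussed above.

```python
# NDVI = (NIR - Red) / (NIR + Red). Healthy vegetation reflects strongly in
# NIR and absorbs red, so values near +1 mean dense photosynthetic cover,
# values near 0 bare ground, and negative values water or shadow.
import numpy as np

rng = np.random.default_rng(3)
red = rng.random((200, 200))    # stand-in red band
nir = rng.random((200, 200))    # stand-in NIR band

denom = nir + red
ndvi = np.where(denom > 0, (nir - red) / denom, 0.0)   # guard against 0/0
```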


I hope this is helpful, and not too confusing. Thoughts or questions?