Tuesday, February 27, 2007
When the Palomares accident took place, I never imagined that Bayesian search theory was used to find the last empty quiver. According to Wikipedia, the same technique is used by the Coast Guard in search and rescue operations. A similar technique could be used to merge all the information acquired in the search for Jim Gray.
The search and rescue program now seems to use SAROPS (a commercial version is SARMAP), where I would expect similar Bayesian techniques to be used. I also found the other presentations at this SAR conference at Ifremer very interesting. The 2006 presentations also cover the current SAROPS implementation (link to movie/presentation).
I want to be proven wrong, but I am pretty sure that current tools do not integrate satellite/radar imagery directly into the maps produced to determine search scenarios. They certainly do not integrate other types of imagery (multispectral or otherwise). I would very much like to find out how time is taken into account in these probabilistic maps.
Unlike other approaches, the Bayesian approach maintains multiple hypotheses over time. The probabilistic maps developed for robotics are quite similar to the ones needed in the search and rescue case.
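The update behind such probabilistic search maps is simple enough to sketch. Below is a minimal, hypothetical Python illustration (all numbers made up) of the Bayes rule at the heart of Bayesian search theory: a cell searched without success loses probability mass, and that mass redistributes to the unsearched cells.

```python
import numpy as np

# Hypothetical 1-D grid of cells with a prior probability that the
# target is in each cell (numbers are made up for illustration).
prior = np.array([0.1, 0.3, 0.4, 0.2])

# Probability of detecting the target if we search a cell and it is there.
p_detect = 0.8

def update_after_negative_search(prior, searched_cell, p_detect):
    """Bayes update after searching one cell and finding nothing."""
    posterior = prior.copy()
    # P(target in searched cell | not found) shrinks by the miss factor.
    posterior[searched_cell] *= (1.0 - p_detect)
    # Renormalize so the posterior sums to one.
    return posterior / posterior.sum()

posterior = update_after_negative_search(prior, searched_cell=2, p_detect=0.8)
```

This is also why a negative search still carries information: every cell that was not searched becomes slightly more likely to hold the target.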
[Thanks Cable for the tip]
Friday, February 16, 2007
The Tenacious Challenge:
Maria Nieto Santisteban and Jeff Valenti (the JHU team) have provided a lot of good data. It is good to provide the full problem so that other teams/people can freely take a stab at it without having to read different sources to figure out where all the information is. The cross-correlation work is really an inference problem based on data fusion (from different sensors and places). I am sure some of you know somebody who does this well on your campus or in your organization.
1. Problem Statement
- There is a boat drifting with the currents (no sail, no engine). You have a model for these currents. The model is shown in an animated GIF here and provided in this dataset. It is a set of elements that are transported from day 0 to day 14. (Time starts at Jan 29, 00h00 GMT)
- The boat has been moving over several days because of the currents.
- There is no known spectral signature for the boat. This means that for every detector with coarse spatial resolution, we have a signal indicating the presence of a boat, but we do not know whether it is the boat we are looking for or some other object. In particular, radar data indicate the presence of something, but we do not know whether this something is the boat of interest. The radar resolution is coarse, and so is the ER-2's.
- Several satellites and planes have flown over the different areas at different times (see the reference section below for bounding boxes). For each of these flights, data were acquired and several hits were obtained. RadarSat 1 data were taken at day 2.6, RadarSat 2 data at day 5.1, and ER-2 data at day 4.8.
- In particular, because of the cloud conditions, we believe the radar data are the most accurate. Objects detected on the first radar pass (RadarSat 1) can be found here. Objects detected on the second radar pass (RadarSat 2) can be found here. Another set of objects was also detected by the ER-2, but we don't know to what extent it is affected by clouds (in other words, we might be missing some items from this detection scheme). Objects detected by the ER-2 are here.
- Our main objective is to evaluate the transport/drift model and to identify a potential target of interest.
The Tenacious Challenge, two questions:
- What are the hits on the RadarSat 1 pass that were also detected on the RadarSat 2 pass? We are assuming the following:
- not all hits of the first pass are in the second pass, and conversely not all hits of the second pass are in the first pass.
- some hits on both the first and second passes are not following currents (powered boats going from place A to B).
- There is some inherent error in the transport solution. Any solution needs to state how large this error is.
- Does any pair identified in the RadarSat 1 and RadarSat 2 passes match an item detected by the ER-2?
We realize that the brain is a very good inference engine; your solution may simply be a description of all these data in a telling graphical manner.
2. Reporting your solution:
If you have a solution, please leave a comment on this entry pointing to it (blog, website...) where you state your results and how you arrived at them. We are assuming the rank of the target identified is also its name; for instance, the third target identified in the RadarSat 1 case should be called RadarSat1-3. Any pair should then be labeled RadarSat1-3/RadarSat2-17, for instance.
 Bounding box coordinates for ER-2 flights are here.
 Bounding box coordinates for the RadarSat 2 flight (Feb3) are here.
 JHU team website with actual images of targets is here.
Unfortunately, I just redid a search on the Landsat database (EarthExplorer from the USGS), and the cloud situation is as bad as what Envisat shows, except for Feb 12.
Thursday, February 15, 2007
There is nothing better than an actual sighting at the right resolution, as provided by Quickbird or Ikonos. If we have some confirmation that the radar cross section of Tenacious is nonzero, then it was detected by Radarsat if the boat was in that region. Here is the extent of the problem for the visible range (Quickbird, Ikonos, ER-2). Images are from Envisat of the region starting Feb 1. I am not saying we could not see anything; I am saying there is a high probability we did not get a shot of Tenacious even when flying over it.
Maria Nieto-Santisteban and Jeff Valenti at Johns Hopkins University (the JHU group) have used the ocean current models provided by the OurOcean folks at JPL to create an animated GIF of how markers move with the currents. Relevant satellite and aerial imagery were obtained 2.6, 4.8, 5.1, and 5.8 days after the adopted zero point in time (Jan 29, 00:00 GMT).
The Radarsat images do not suffer from clouds or fog. They were also taken very early in the search (Jan 31 and Feb 3). Quickbird and Ikonos shots would be useful in evaluating whether any of the radar targets are of interest. My assumption is that Tenacious responded to the radar but was probably covered by clouds when visible-light satellites or planes (ER-2) passed over it.
Following this line of thinking, I produced a KML file for the Radarsat images as processed by Maria and Jeff (it needs to be polished, anybody?), and it probably needs to include the ER-2 data. The Mechanical Turk findings are not available online. One can see part of the KML file directly on Google Maps, but it does not display well because it is too big for Google Maps in this fashion.
The major capability provided by the JPL folks is the ability to remove targets that are really false positives. So instead of looking at ocean current models and trying to fit the targets found by Radarsat, it would be interesting to figure out how targets found by Radarsat on Jan 31 were transported to another position by Feb 3 using the model. By evaluating the distance between the Jan 31 targets transported by the JPL model and the actual targets found on Feb 3, we would get a good view of the ones for which the model is accurate. Then we could evaluate whether any of the targets found by Quickbird on Feb 2 and Feb 3 are anywhere close (this is why we use the Radarsat images first). It is also of paramount importance to take the JPL current models with a grain of salt. This is fluid mechanics, after all.
By eyeballing the Radarsat targets and the crosses of the JPL model, one seems to see some similar features pointing to the potential correctness of the ocean current model. Some of the targets could be removed using the Quickbird imagery.
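The distance test described above can be sketched in a few lines. Here is a minimal, hypothetical Python illustration: first-pass targets, already advected by a drift model to the time of the second pass, are greedily matched to second-pass targets by nearest neighbor, with a tolerance to reject pairs that plainly do not follow the currents. All positions and the tolerance are made up for illustration.

```python
import numpy as np

# Hypothetical positions (lon, lat in degrees): targets from the first radar
# pass after being advected by the drift model to the time of the second
# pass, and the targets actually seen on the second pass.
transported_pass1 = np.array([[-123.10, 37.50],
                              [-123.40, 37.20],
                              [-122.90, 37.80]])
pass2 = np.array([[-123.12, 37.52],
                  [-122.50, 37.10],
                  [-123.38, 37.18]])

def match_targets(a, b, max_dist_deg=0.05):
    """Greedy nearest-neighbor matching; pairs farther apart than the
    tolerance are left unmatched (powered boats, new objects)."""
    pairs = []
    for i, p in enumerate(a):
        d = np.linalg.norm(b - p, axis=1)      # distances to all pass-2 hits
        j = int(np.argmin(d))
        if d[j] <= max_dist_deg:
            pairs.append((i, j, float(d[j])))  # (pass1 idx, pass2 idx, residual)
    return pairs

pairs = match_targets(transported_pass1, pass2)
```

The residual distances in the matched pairs are themselves the figure of merit: their spread is a direct measure of how much to trust the transport model.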
Monday, February 12, 2007
The more I look at some of the multispectral images, the more I am convinced that the obstruction by clouds should not be discounted. More importantly, another issue is data fusion from different sensors.
Thanks to both the Johns Hopkins and the University of Texas websites, we have data from radar (Radarsat) and in the visible wavelength regime (ER-2, Ikonos, Coast Guard sightings). Every sensor has a different spatial and spectral resolution, yet some can see through clouds whereas others cannot. Multispectral data could be added to this mix, but they suffer from low spatial resolution (lower than the radar) while having higher spectral resolution. Other information, such as human sightings by private airplane parties, should also be merged with the previous information. [As a side note, I have a hard time convincing the remote sensing people that spatial resolution is not an issue as long as we can detect something different from the rest of the background.]
Finally, the other variable is time. Some areas have been covered by different sensors at different times. This is where the importance of the drift model becomes apparent.
The state of our knowledge of what is known and what is not known becomes important because, as time passes, it becomes difficult to bring to bear the resources of search and rescue teams. I have been thinking about modeling this using Maximum Entropy (Maxent), but any other modeling would be welcome, I believe. The point is that when a measurement is taken at one spatial point, we should look at it as a measurement that will vary with time. The longer you wait, the less you know whether Tenacious is there or not.
For those points where we have identified potential targets, we need to assign some probability that Tenacious is there, but we also know that if we wait long enough, there will be a non-null probability that it has drifted away from that point. This formalism also needs to allow us to portray the fact that no measurements were taken over certain points in a region where other points were measured (the issue of clouds). This is why I was thinking of implementing a small model based on the concept of the Probabilistic Hypersurface, a tool designed to store and exploit the limited information obtained from a small number of experiments (a simplified construction of it can be found here). In our case, the phase space is pretty large: each pixel is a dimension (a pixel being the smallest pixel allowed by the spatial resolution of the best instrument). All pixels together represent the spatial map investigated (this is a large set). The last dimension is time. In this approach, the results of JHU and UCSB as well as the Mechanical Turk could be merged pretty simply. This would enable us to figure out whether any of the hits on Ikonos can be correlated with the hits on Radarsat. More importantly, all the negative visual sightings by the average boater could be integrated as well, because a negative sighting is as important as a positive one in this search. And if the computational burden becomes an issue for the modeling, I am told that San Diego State is willing to help out big time.
What I am proposing may already have been implemented by somebody working in Bayesian statistics or maximum entropy techniques. Anybody?
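One hedged way to sketch the time dependence described above is to let the Bayes factor of a negative sighting decay back toward the prior as time passes: right after the sighting the cell looks nearly empty, but as drift could have brought the boat in, the observation loses its weight. The decay rate and all numbers below are made-up assumptions, not a calibrated model.

```python
import numpy as np

def evidence_weight(t_since_obs_hours, decay_per_hour=0.05):
    """Weight of an observation decays exponentially with elapsed time
    (decay rate is a made-up illustrative number)."""
    return np.exp(-decay_per_hour * t_since_obs_hours)

def cell_probability(prior, p_detect, t_since_negative_obs):
    """Posterior that the target is in a cell given one negative sighting,
    with the Bayes factor relaxed back toward the prior over time."""
    w = evidence_weight(t_since_negative_obs)
    # Effective miss probability interpolates between full strength (w = 1,
    # fresh observation) and no information (w = 0, stale observation).
    miss = 1.0 - w * p_detect
    num = prior * miss
    return num / (num + (1.0 - prior))

p_now = cell_probability(prior=0.2, p_detect=0.9, t_since_negative_obs=0.0)
p_later = cell_probability(prior=0.2, p_detect=0.9, t_since_negative_obs=48.0)
```

A fresh negative sighting pulls the cell probability well below the prior; two days later the same sighting has nearly lost its value, which captures the point that the longer you wait, the less a past measurement tells you.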
Saturday, February 10, 2007
Here is the screen grab of a search made on the EarthExplorer website of the USGS.
Landsat 5, Jan 28
Landsat 7, Jan 29
Landsat 5, Jan 30
Landsat 7, Feb 1
Landsat 5, Feb 2
Landsat 7, Feb 3
Landsat 5, Feb 4
The Envisat images were taken more often over the bay area.
The screen grab of the search results shows a preview of the images on the left. Many of these views are obstructed by a large cloud cover.
Friday, February 09, 2007
This is what the bay area looked like on Feb 3, 2007, as seen by Landsat 7 at 10:36 AM (local time). [This is a low-resolution version of the actual image]
Thursday, February 08, 2007
After a long and extensive back-and-forth with the folks who handle the EO-1 systems (Hyperion and ALI), we figured that there were no more options on the hyperspectral side of things. While multispectral sounds interesting, we would be fighting the resolution issue.
Initially we thought that with EO-1, even if the resolution was on the order of 30 meters (GSD), we could expect to pick up either the green dye or the boat because of the additional information provided by the 220 channels of Hyperion. In other words, use the spectral information to provide sub-pixel resolution. The thinking was then to take images over the bay area in order to calibrate either the green dye or the boat as a target (by having people on the ground reproduce these targets). One would then use these signatures and retask EO-1 to run an autonomous detection program (comparable to the one NASA devised for detecting volcanic eruptions). Since we figured it was not feasible within our time constraints, we took another look at the multispectral data. We found that Landsat 7, Landsat 4 and 5, and Envisat/MERIS had data on the region of interest for the past two weeks. The ground resolution was still an issue.
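The sub-pixel idea can be sketched as linear spectral unmixing: model a mixed pixel as a combination of known endmember spectra and solve for the abundances. The spectra below are random placeholders, not actual Hyperion signatures; the point is only that with ~220 bands, a target filling a few percent of a pixel can still be recovered.

```python
import numpy as np

# Sub-pixel detection via linear spectral unmixing. Signatures below are
# made up (real Hyperion pixels have ~220 calibrated bands).
rng = np.random.default_rng(0)
n_bands = 220
water = rng.uniform(0.0, 0.2, n_bands)   # hypothetical water spectrum
dye = rng.uniform(0.5, 1.0, n_bands)     # hypothetical green-dye spectrum
E = np.column_stack([water, dye])        # endmember matrix (bands x 2)

# Simulate a pixel that is 95% water and 5% dye, plus sensor noise.
true_abundance = np.array([0.95, 0.05])
pixel = E @ true_abundance + rng.normal(0, 0.005, n_bands)

# Least-squares estimate of the abundances (unconstrained, for simplicity;
# a real unmixing would enforce nonnegativity and sum-to-one constraints).
abundance, *_ = np.linalg.lstsq(E, pixel, rcond=None)
```

Even with noise, the 5% dye fraction comes back cleanly because the many bands heavily overdetermine the two unknowns.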
Then Lawrence Ong (NASA GSFC) came back with this amazing insight/example: Chesapeake Bay on a Sunday when people are using their boats as seen by the ETM+ of Landsat 7.
You can easily see the boats and their wakes. Multispectral imagery should be added to the mix already in place (Radarsat, Ikonos, ER-2, drift models,...).
Any of these ideas deserves to be remembered for future search and rescue efforts.
Using the EarthExplorer UI from the USGS, I did a search for possible hits of multispectral imagery over the area of interest and the time of interest. This is what I found:
- ETM+ (Enhanced Thematic Mapper Plus) on Landsat 7 provides high-resolution (15- to 60-meter) multispectral data. For the area of interest and the period of interest, there are five images (out of 20; the rest cover land).
- Landsat 4-5 Thematic Mapper (TM) provides 30- to 120-meter multispectral data from Landsat 4 and 5. For the area of interest and the period of interest, there are nine images (out of 20; the rest cover land).
- No hits for ALI or Hyperion on EO-1.
- Advanced Very High Resolution Radiometer (AVHRR): about 20 hits, but the resolution is only 1 km.
Envisat carries the MERIS camera. MERIS is a programmable, medium-spectral-resolution imaging spectrometer operating in the solar reflective spectral range. Fifteen spectral bands can be selected by ground command, each with a programmable width and a programmable location in the 390 nm to 1040 nm spectral range. MERIS specifications can be found here.
- Accuracy: ocean colour bands, typical S/N = 1700
- Spatial resolution: land and coast, 260 m x 300 m
- Swath width: 1150 km
- Waveband: VIS-NIR, 15 bands selectable across 0.4-1.05 micrometers (bandwidth programmable between 0.0025 and 0.03 micrometers)
When doing the search through the GUI called MIRAVI (it seems to work only with Internet Explorer), I found seven entries. Many of them are very cloudy.
Tuesday, February 06, 2007
This NYT piece on orbiting debris becoming a threat is interesting, but it does not point to the real issue, in my opinion. There is a good likelihood that some of the debris created by collisions between elements in low Earth orbit (LEO) produces junk in geostationary orbit (GEO). That orbit is very important to the whole communication infrastructure.
Sailor's dye marker from Jim Gray's boat Tenacious? As identified by Istvan Csabai. Most information on the search can be found at the Tenacious blog.
[Update: according to the Coast Guard, it's not it]
Sunday, February 04, 2007
This is how we evaluated some of the images we gathered from Starnav 1.
After receiving the first images from Starnav 1, we figured that most of the surroundings of the camera were shining too much light into it. After going through the AVIS viewer and playing with the filter threshold, we could find other unknown things in front of that camera (items B and C).
Item B was very difficult to find because it was really only a few pixels above a certain background, and one had to remove the brighter area around it. Only then could one see its round shape.
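The threshold trick above can be illustrated with a toy contrast window: clipping the bright glare and stretching the remaining dynamic range makes a feature only a few counts above background visible. The image values below are invented for illustration.

```python
import numpy as np

# Toy image: a faint object only a few counts above background is invisible
# at full dynamic range but pops out once the glare is clipped.
image = np.full((8, 8), 100.0)   # background level
image[1:3, 1:3] = 4000.0         # bright glare near the camera edge
image[5, 5] = 103.0              # faint item, a few counts above background

def window(img, lo, hi):
    """Clip to [lo, hi] and rescale to [0, 1] for display."""
    clipped = np.clip(img, lo, hi)
    return (clipped - lo) / (hi - lo)

full = window(image, image.min(), image.max())   # glare dominates the range
zoomed = window(image, 99.0, 105.0)              # faint item now visible
```

In the full-range view the faint item differs from the background by less than one display level; in the windowed view it stands out by about half the display range.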
Friday, February 02, 2007
In a previous entry on prompt critical space debris, I was really mentioning the possibility of igniting a chain reaction in low Earth orbit through the continuing addition of space debris in LEO.
At some point, a colleague of mine and I looked into devising a transport equation similar to the one used in neutron transport theory in order to evaluate what we in the nuclear engineering world call criticality. Being critical is one thing, but when a system is "prompt critical," delayed neutrons cannot slow down the chain reaction, yielding a very hazardous situation.
It would be worth looking into in light of the recent Chinese test. Sure, "it does not pose a threat to any country" for the moment, except that we are talking about a potentially substantial increase in the number of particles in orbit, beyond the sum of all the debris from just the target of that test. What is really interesting is that there is a loss factor: drag from the Earth's atmosphere can remove some of these debris over time. Yet we don't really know how to fit experiments with observations. Some are attempting to do just that within the Robust Mathematical Modeling framework.
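As a loudly hypothetical sketch of the analogy (not a validated debris model), one can write the fragment population balance like criticality bookkeeping: a multiplication factor k compares fragments produced by collisions per unit time to fragments removed by drag, and k > 1 would be the runaway regime. All rates below are invented.

```python
# Toy debris "criticality" balance: fragments behave like neutrons, with
# production from collisions and loss from atmospheric drag. All rates are
# made-up illustrative numbers, not measured orbital parameters.
def debris_step(n, collision_rate, fragments_per_collision, drag_loss_rate):
    """Advance the fragment population by one time step."""
    produced = collision_rate * n * fragments_per_collision
    lost = drag_loss_rate * n
    return n + produced - lost

def multiplication_factor(collision_rate, fragments_per_collision, drag_loss_rate):
    """Analog of k: production per fragment divided by loss per fragment."""
    return (collision_rate * fragments_per_collision) / drag_loss_rate

n = 1000.0
for _ in range(10):
    n = debris_step(n, collision_rate=0.001, fragments_per_collision=5,
                    drag_loss_rate=0.01)

k = multiplication_factor(0.001, 5, 0.01)
```

With these numbers k is below one, so drag wins and the population slowly decays; raising the collision rate (or lowering drag, as at higher altitudes) tips k above one and the linearized population grows without bound, which is the debris analog of a critical system.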
This is timely, especially four years after what happened in orbit.
- The controlled way, where one builds a material called a metamaterial between the sensor and the object. This metamaterial, which has a negative refraction index, will bend light rays.
- The uncontrolled way, where the bending of light is performed using either turbulence or gravity.
- And then there is the assumption made on the image (the prior). One can either use several shots to "average" toward the object of interest, or one can use compressed sensing, as in random lens imaging.