Citizen Science and TESS Data Release


#1

Hi everyone, I have been watching your show for a few months and I really love your enthusiasm for space and science.

One piece of news you didn’t mention in the last show (hardly surprising, because there was so much other news) is the first public release of data from TESS (the Transiting Exoplanet Survey Satellite). This is something anyone can get involved with, because there is a project to look through the data on the citizen science website Zooniverse. I expect this and many other citizen science projects are things that people in the TMRO community would like to take part in, and maybe something you could talk about on the show.

Looking forward to your next program,
David


#2

Good data – I’ll see if we can add this to some show notes for a future episode


#3

Might be cool to get someone from Zooniverse to come on. Citizen Science is such a cool thing and we should really expand the hell out of it.


#4

Tucson


#5

Hi Ben and Jared, thanks for your replies. This is really cool! TESS is already producing loads of good data, with over 200 objects of interest just from one month of observations. Many of them could be exoplanets.


#6

The best data will take years. We need to see a planetary crossing multiple times in order to measure an orbital period, and the habitable-zone planets will be in the 1–4 year period range. I’m really eager to see a star system mapped out well enough to infer non-observable, out-of-plane planets from gaps in the orbital resonance pattern (much as Neptune and Pluto were predicted from orbital perturbations, and as the resonance gaps in the asteroid belt were found). That’ll probably take at least a decade.
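To make the period measurement concrete, here’s a minimal Python sketch of the idea (the transit mid-times are made up for illustration, and real pipelines are far more careful about noise and missed transits): difference the mid-times of repeated crossings to get a first guess at the period, then refine it with a straight-line fit against transit number.

```python
# A minimal sketch (not TESS pipeline code): estimating an orbital
# period from the mid-times of repeated transits. The timestamps
# below are hypothetical, chosen only for illustration.
import numpy as np

transit_times = np.array([2.10, 5.58, 9.07, 12.55, 16.04])  # days (made up)

# With at least two transits, the median spacing between consecutive
# mid-times gives a first guess at the period.
period_guess = np.median(np.diff(transit_times))

# Refine with a least-squares fit of mid-time vs. transit number;
# this also tolerates missed transits if epochs are assigned correctly.
epochs = np.round((transit_times - transit_times[0]) / period_guess)
period, t0 = np.polyfit(epochs, transit_times, 1)

print(f"estimated period: {period:.3f} days")  # ~3.48 days here
```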

I’m also betting that planets with big moons (like Earth and Pluto) will be more habitable, so I want to see how often a planet with an observable moon crossing turns up. I guess that will take many decades to spot in the data.


#7

Odd question, but could we face too much data? After all, there is a limited amount of processing power on the planet. I know it’s growing rapidly; I’m just wondering, what with TESS, JWST, Hubble, Kepler, etc., not to mention the data from Earth-bound sources and of course landers, rovers, and orbital recon missions, the amount of data must be staggering and is only going to get more massive. Has anyone taken into account the amount of processing that is going to be required, or have any idea of how long it will take? I assume we are not going to be able to go through this stuff in real time, given the multiple observations required to map a single exoplanet’s orbit.

We have reached, or are reaching, the limit of what we can do with silicon-based processors, which is why we now run multiple cores to get faster rather than pushing higher clock speeds. So unless there is a new breakthrough in nanometer chip design, or we fully crack quantum computing (or, less likely, biological computing), we will be limited by the number of cores we can have running at, say, around 4 GHz with 2 threads per core. Not to mention, of course, the storage and distribution of said data (and the fact that no geek in history has built a computer and not run a game on it :P). Maybe I’m missing something, and I hope I am, but could we be in danger of generating more data than we can handle?


#8

This is already happening in bioinformatics. When searching for new DNA patterns, each new pattern found increases the number of searches needed to determine whether a pattern has already been seen, and that growth is roughly exponential (on the order of e^n). Since Moore’s Law gives computing growth of only about 2^n, and e^n outgrows 2^n, no amount of computing increase will ever catch up (until we run out of new DNA to add to the search). The more computing power you apply to the problem, the more it resists analysis.
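Just to make the growth-rate comparison concrete (the n values below are arbitrary, nothing bioinformatics-specific): the ratio e^n / 2^n = (e/2)^n grows without bound, so base-e search growth outruns Moore’s-Law-style doubling.

```python
# Quick check of the growth-rate claim: e^n outpaces 2^n because
# their ratio (e/2)^n keeps growing. The n values are arbitrary.
import math

for n in (10, 50, 100):
    ratio = (math.e / 2) ** n
    print(f"n={n}: e^n / 2^n = {ratio:.3g}")
# n=10: ~21.6, n=50: ~4.6e6, n=100: ~2.1e13
```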

In relation to exoplanets, there’s another problem to consider, however. A lot of these data sets are resistant to automated analysis; a person is required to spot the pattern. There are some interesting “wisdom of the crowd” systems which have been put in place to help with this. The MMO EVE Online, for example, has built a mini-game into the game which gives players real data and returns the players’ analysis to the scientists. Check out this video:


#9

That’s seriously cool. I was expecting distributed computing solutions to up the number of cores available … or something like the Bitcoin mining systems using lots of GPUs.

I didn’t realise that exoplanets were such a hard problem for a computer program to deal with. I guess that will shake out as we move towards true AI, although I’m sure that, just because of the different ways of ‘seeing’ things, there will always be some things a person is better at than a machine. At least until we are well past the singularity.


#10

Another problem is storing huge amounts of data. This is already a major issue at the Large Hadron Collider (LHC). The LHC produces data at such a fast rate that only a tiny fraction can be stored and algorithms have to process the data in real time to decide what to keep.
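As a toy illustration of that kind of real-time triage (this is nothing like the actual LHC trigger software, and the energy cut is invented): stream events through a quick selection and keep only the tiny fraction that passes, so storage stays bounded.

```python
# A toy illustration of real-time event filtering: process events as
# they stream in, keep only those passing a quick cut, discard the rest.
# Hypothetical numbers; real triggers apply many layered criteria.
import random

ENERGY_CUT = 95.0  # invented threshold for this sketch

def event_stream(n):
    """Simulate n detector events with random 'energy' readings."""
    for _ in range(n):
        yield {"energy": random.expovariate(1 / 20.0)}

kept = [e for e in event_stream(1_000_000) if e["energy"] > ENERGY_CUT]
print(f"kept {len(kept)} of 1,000,000 events")  # typically under 1%
```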


#11

Supercomputers may be a better option than a million people sorting through bulk data.
Data storage and backup are OK; the JPL archive is massive.


#12

There’s also the USGS citizen science stuff:
https://www.usgs.gov/science-support/osqi/youth-education-science/citizen-science