UrtheCast: When YouTube meets Google Earth in glorious HD

This is probably one of the most disruptive stealth projects to have surfaced over the last few years: it combines features of Google Earth with the playback and video functionality of YouTube. We caught up with the team and asked them some questions about how the solution will work and what challenges they have encountered.

Can you describe what UrtheCast is?

UrtheCast is a new web platform that will allow people to experience events unfolding around the world in a whole new and highly engaging way. Working in an exclusive partnership with RSC Energia and Roscosmos (the Russian space agency), UrtheCast is building, launching, installing, and will operate two cameras on the Russian module of the International Space Station. One generates a large volume of still image data of the Earth, while the other is a high-resolution video camera providing unprecedented video footage of the Earth from space. Data collected by our cameras will be downlinked to ground stations around the planet, displayed in near real time on the UrtheCast web platform, and also distributed directly to our partners and customers.

The UrtheCast web platform will have the following elements: (1) a "Google Earth"-like interface, but with fresh, constantly updated image data that lets you see changes over time; (2) a "YouTube"-like experience that shows videos tagged to geographic locations, with similar search functionality (the ability to search videos by location, category, subject matter, etc.); and (3) a real-time social layer to generate wide consumer interest: the website will integrate with popular social media platforms such as Facebook and Twitter, showing tweets and user-uploaded content tagged to geographic locations on the base map, appearing in near real time (e.g., as they are tweeted). In addition, we will open up our API and allow developers to use our data to create applications ("apps") for mobile devices like the iPhone, iPad, and Android.
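The API has not been published yet, so the following is a purely hypothetical sketch of what a location-tagged video search against it could look like; the endpoint URL, parameters, and response fields are all invented for illustration.

```python
# Purely hypothetical sketch of a location-based video query against the
# (not yet published) UrtheCast API. Endpoint, parameters, and response
# fields are assumptions for illustration only.
import requests

API_BASE = "https://api.urthecast.example/v1"  # placeholder URL

def videos_near(lat, lon, radius_km=50, category=None):
    """Return videos tagged near a geographic location."""
    params = {"lat": lat, "lon": lon, "radius_km": radius_km}
    if category:
        params["category"] = category
    resp = requests.get(f"{API_BASE}/videos/search", params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()["videos"]

# e.g. list videos tagged within 50 km of central London
for video in videos_near(51.5074, -0.1278):
    print(video["id"], video["title"])
```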

What kind of challenges have you been facing while developing the system (dust, death rays, zero gravity, stability, cooling, etc.)?

The cameras must be designed to withstand the space environment. This includes, for example, handling the large temperature swings that occur in orbit as you go from day to night, tolerating the radiation levels that equipment is exposed to in space, and withstanding the loads and vibrations of launch. The teams building the cameras are the Rutherford Appleton Laboratory (RAL) in Oxfordshire, UK, and MacDonald, Dettwiler and Associates (MDA) in Canada, both of which have many years of heritage building space hardware and have dealt with these challenges many times before.

Other specific challenges include pointing the video camera accurately enough at selected ground targets and dealing with the vibration environment of the ISS (due to crew movement and other operations occurring on the station). We have therefore needed to add star trackers to get more accurate knowledge of the ISS pointing direction at the location where our cameras are mounted (these star trackers are small cameras that image the stars and use the star patterns to work out the pointing direction). In addition, we have added gyros to our video camera, as well as specially designed vibration isolators to attenuate the vibrations from the ISS.
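Blending a slow but drift-free star-tracker fix with fast gyro data is a standard attitude-determination pattern; a minimal one-dimensional sketch of the idea is below. UrtheCast's real algorithm is not public and would be three-dimensional and quaternion-based, so this complementary filter, its rates, and its numbers are assumptions for illustration.

```python
# Minimal 1-D sketch of fusing a low-rate star-tracker fix with
# high-rate gyro data via a simple complementary filter. This is an
# illustrative stand-in, not UrtheCast's actual algorithm.

def fuse(angle_est, gyro_rate, dt, star_angle=None, alpha=0.98):
    """Propagate the pointing angle with the gyro; when a star-tracker
    measurement arrives, blend it in to correct accumulated gyro drift."""
    angle_est += gyro_rate * dt           # integrate gyro (drifts over time)
    if star_angle is not None:            # star fix: slow but drift-free
        angle_est = alpha * angle_est + (1 - alpha) * star_angle
    return angle_est

# e.g. 100 Hz gyro updates with a star-tracker fix once per second
angle = 0.0
for step in range(1000):
    star = 0.5 if step % 100 == 0 else None   # fake star measurement
    angle = fuse(angle, gyro_rate=0.001, dt=0.01, star_angle=star)
```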

What does UrtheCast use as a processing unit?

We have two cameras, the Medium Resolution Camera (MRC) and the Video Camera (also called the High Resolution Camera, or HRC). Each camera has a Data Compression Unit (DCU) that takes the raw data and compresses it into a stream of JPEG2000 files. The data streams from both cameras go into a Data Handling Unit (DHU) that we have on the inside of the Space Station. The DHU has an Intel Core 2 Duo processor running QNX and includes a large-capacity solid-state drive (400 GB) where we store the image and video data. Once we are in view of a ground station, which will happen at least once per orbit (90 minutes), we will send the image data, as well as all the ancillary data needed for image processing, to the ground.
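In other words, the DHU plays a store-and-forward role: compressed files accumulate on the SSD between contacts and drain to the ground during a pass. A toy model of that behaviour is sketched below; the class, its methods, and the sizing logic are assumptions for illustration, not the flight software.

```python
# Toy model of the DHU's store-and-forward role: JPEG2000 files from
# both cameras accumulate on the 400 GB SSD, then drain to the ground
# during a station contact. Everything here is illustrative.
from collections import deque

SSD_CAPACITY_GB = 400

class DataHandlingUnit:
    def __init__(self):
        self.queue = deque()      # buffered JPEG2000 files (sizes in GB)
        self.stored_gb = 0.0

    def record(self, size_gb):
        """Buffer a compressed image/video file if the SSD has room."""
        if self.stored_gb + size_gb <= SSD_CAPACITY_GB:
            self.queue.append(size_gb)
            self.stored_gb += size_gb
        # else: a real system would prioritize or overwrite old data

    def downlink(self, budget_gb):
        """Drain buffered files, oldest first, during a ground pass."""
        sent = 0.0
        while self.queue and sent + self.queue[0] <= budget_gb:
            size = self.queue.popleft()
            sent += size
            self.stored_gb -= size
        return sent
```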

How is data being transmitted from the ISS to the receiving station(s) and what are the fail safe/redundancy options?

We use a high-speed data downlink that was recently installed on the ISS Russian Segment by RSC Energia (Russia's prime contractor for the ISS Russian Segment). The RF link operates in X-band (~8 GHz) at 100 Mbps. We will have ground stations located around the world to receive the data. We have not yet finalized the locations, but broadly speaking, we will have three or four in Russia, likely one in the UK and one in Canada, and two in the southern hemisphere (likely South America and Australia).

What kind of hardware is available in the receiving station(s): servers, connectivity, storage?

The ground stations will be very typical of those in use today for most other Earth observation satellites. In fact, we expect to use some existing stations, under a services contract, where we will add the necessary hardware and software to pick up our signal. These stations will have an antenna typically ~5-7 m in diameter that auto-tracks the ISS as it flies overhead, with the capability to lock on to the RF signal and maintain the pointing accuracy needed to keep a good RF link. They will have the necessary RF equipment to sync up to the signal and will locally archive each downlink pass (~6 GB of storage is needed per pass). This data will then be sent to the UrtheCast processing centre over existing high-speed fibre lines.
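As a back-of-envelope check, the ~6 GB per pass figure is consistent with the 100 Mbps link quoted above if a usable contact lasts around eight minutes; that pass duration is our assumption for a low-Earth-orbit target, not a figure from the interview.

```python
# Back-of-envelope check: ~6 GB per pass vs. the quoted 100 Mbps link.
# The 8-minute usable contact time is an assumption, not a quoted figure.
link_rate_mbps = 100            # X-band downlink rate (from the interview)
pass_seconds = 8 * 60           # assumed usable contact time per pass
bits = link_rate_mbps * 1e6 * pass_seconds
print(bits / 8 / 1e9)           # -> 6.0 GB per pass
```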

What kind of OS is this running on and what middleware are you using?

Once the data is in our processing centre, we perform a series of processing steps to get it ready to be pushed out onto the web. This includes decompressing the data, as the cameras have a built-in JPEG2000 compression engine that compresses the data before it is stored on board and downlinked. For the web, we will use Amazon Web Services (the Amazon cloud) to store the vast amount of data our system generates and move it around the world, and we will also use the cloud to distribute some of our processing steps.
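To make that concrete, here is a hedged sketch of one such step: decoding a downlinked JPEG2000 file and pushing the result to cloud object storage. UrtheCast's actual pipeline is not public; Pillow and boto3 are stand-ins, and the bucket and key names are placeholders.

```python
# Sketch of one ground-processing step: decode a downlinked JPEG2000
# file and upload the result to S3. Library choices, bucket, and key
# names are assumptions; Pillow decodes .jp2 when OpenJPEG is present.
import io

import boto3
from PIL import Image

def process_and_store(jp2_path, bucket="urthecast-demo", key=None):
    img = Image.open(jp2_path)          # decompress JPEG2000
    buf = io.BytesIO()
    img.save(buf, format="PNG")         # re-encode for web delivery
    buf.seek(0)
    key = key or jp2_path.rsplit("/", 1)[-1].replace(".jp2", ".png")
    boto3.client("s3").upload_fileobj(buf, bucket, key)
    return key
```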

What happens if something goes wrong or something gets broken?

First of all, the cameras and associated electronics are made by our partners, the Rutherford Appleton Laboratory (UK) and MDA (Canada), who have extensive experience building highly sophisticated, highly reliable equipment for space. They bring tremendous heritage to this: for example, MDA has built all the Canadian equipment on the ISS, such as Canadarm2 and the Mobile Servicing System, and RAL has built nearly 200 different space instruments.

They are using the same processes for our cameras that they would use for equipment going on a dedicated satellite, where there is no possibility of maintenance if anything goes wrong, which helps to reduce our risk considerably. We also have redundancy in selected areas of the system to tolerate certain failures, perhaps with some degraded performance. It is also possible to upload entirely new software and firmware if necessary to change how we operate, so we have a lot of flexibility to reconfigure the system to deal with failures.

That being said, since the equipment is going onto a manned space station, there is also the possibility of some maintenance by the cosmonauts if something were to fail. This could mean replacing one of the electronics units, for example. Russian Progress supply ships are constantly going up to the ISS, and if something were to fail, we may be able to send up a replacement unit. This further helps to reduce our risk.