Metrics and Growth
When I first started on the project, we had fewer than six months before the first production satellite was scheduled to begin providing data for processing and delivery to what we hoped would be our premier customer, the 2010 Vancouver Winter Olympics. The data center was ready on time. Unfortunately, the launch of the first production satellite was delayed and did not meet the deadline.
We were fortunate that the demonstrator/test satellite, designed for an orbital life of less than one year, was still operating after nearly two years in orbit. Not wanting to miss the opportunity to support the Olympics, we added an additional ground station to reduce latency and began delivering reports on shipping activity in the waters off the Vancouver coast to the Olympic security services. The project was a big success, and we received some very favorable publicity.
This first use of the data center generated approximately 3,000 messages a day from one hundred or so ships off the northwest coast of North America. This single test satellite and the messages it provided were easily handled by the new data center.
With processing power in excess of 2.1 teraflops, 3 terabytes of storage, and 100 Mbps of Internet connectivity, the data center was originally designed to support up to six high-duty-cycle satellites simultaneously and to deliver 500,000 messages per day from ships worldwide.
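As a rough sanity check on that design capacity, the short calculation below works out what 500,000 messages per day implies for the 100 Mbps link; the average message size used here is an assumption for illustration, not a figure from the project.

    # Back-of-the-envelope check of the original design capacity.
    # The ~360-byte average message size is an assumption for illustration,
    # not a figure from the project.

    MESSAGES_PER_DAY = 500_000          # design target, ships worldwide
    AVG_MESSAGE_BYTES = 360             # assumed average processed message size
    LINK_MBPS = 100                     # Internet connectivity of the data center

    msgs_per_second = MESSAGES_PER_DAY / 86_400
    delivery_mbps = msgs_per_second * AVG_MESSAGE_BYTES * 8 / 1_000_000

    print(f"{msgs_per_second:.1f} messages/s on average")
    print(f"~{delivery_mbps:.3f} Mbps of delivery traffic vs. {LINK_MBPS} Mbps available")

Under these assumptions, delivery traffic is a small fraction of the available bandwidth, consistent with the headroom the original design was intended to provide.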
Over the course of the next five years, we made minimal annual investments in hardware at the data center. We focused on process and procedure improvements, as well as software efficiency, to support a constellation that grew from the single test satellite to six production satellites.
Our growth continued as we added customers and satellites. Experience from the previous years taught us that the processing needed to deliver our services had a relatively stable baseline, but it also had significant spikes during which the data center, as designed, was not adequate to deliver on our service-level commitments.
An easy but expensive solution would have been to add new hardware resources to our data center. We chose, instead, to leverage the rapidly evolving 'Cloud Computing' offerings to give us an elastic processing capacity we could quickly expand into, paying for processing only 'as needed'.
Since the original data center was designed as a 'Private' cloud computing environment, utilization of 'Public' cloud computing services was relatively easy and straightforward for the staff. We developed the planning processes, software tools, and procedures that allowed us to add computing power to our infrastructure on an as-needed basis. When the extra capacity was no longer needed, we could contract back to our baseline data center. As a result, we had almost no additional capital investment and only a small increase in operating costs to cover the pay-as-you-go 'Public' cloud usage.
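A minimal sketch of the kind of burst-out decision this pay-as-you-go model enabled is shown below; the throughput figures, the service-level window, and the function itself are hypothetical placeholders rather than the actual tools and procedures we built.

    # Illustrative cloud-bursting policy: keep the private cloud as the baseline
    # and rent public-cloud nodes only while the backlog exceeds what the
    # baseline can clear within the service-level window.
    # All numbers and names here are hypothetical.

    BASELINE_MSGS_PER_HOUR = 30_000      # assumed private-cloud throughput
    BURST_NODE_MSGS_PER_HOUR = 5_000     # assumed throughput of one rented node
    SLA_HOURS = 2                        # assumed delivery commitment

    def burst_nodes_needed(backlog_msgs: int) -> int:
        """Return how many public-cloud nodes to rent for the current backlog."""
        capacity_needed = backlog_msgs / SLA_HOURS           # messages/hour required
        shortfall = capacity_needed - BASELINE_MSGS_PER_HOUR
        if shortfall <= 0:
            return 0                                          # baseline is enough
        # Round up to whole nodes.
        return -(-int(shortfall) // BURST_NODE_MSGS_PER_HOUR)

    # Example: a spike leaving 120,000 queued messages needs capacity beyond baseline.
    print(burst_nodes_needed(120_000))   # -> 6 extra nodes, released when the backlog clears

The design choice that mattered was keeping the private cloud as the always-on baseline and treating rented capacity as disposable, so it could be released as soon as a spike had passed.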
As of 2017, the number of satellites had grown to ten. The original data center had seen only minor upgrades and additions since its inception. Our 'Private' cloud processing capacity, however, had nearly doubled to 4 teraflops, and it had become distributed globally across four regional facilities and one virtual facility. We had developed an elastic processing capacity that allowed us to expand to meet any forecast need within days, or even hours, when needed. This infrastructure was delivering over 3,000,000 messages per day from over 100,000 ships worldwide.
In conjunction with our partners, we had designed a next generation of satellites that pushed much of the processing originally done in the data center out to in-orbit processors on 65 satellites to be launched over the coming years.
The original data center was being re-tasked to provide much broader global distribution of our products, with regional processing and distribution to deliver the highest possible service levels. Plans and designs were being developed to add 'Big Data' and 'Information as a Service' offerings through software enhancements, increasing the value we added to the unique data set collected by the next generation of satellites.
The new generation of satellites is now in orbit, operationally detecting and processing information from over 150,000 ships globally per day. The original data center, which started out delivering 3,000 messages per day ten years ago, now delivers over 65,000,000.