
PiCloud Platform

PiCloud gives you a supercomputer at your fingertips.

With only a couple of lines of code, the PiCloud Platform lets you leverage thousands of cores of computational power and terabytes of data storage without having to manage, maintain, or configure servers.

The PiCloud Platform is ideal for high performance computing, batch processing, and scientific computing applications.

You interface with PiCloud through our clients: our cloud library, designed specifically to offload Python functions, and our command-line interface (CLI), which can execute any *nix-compatible program, including those written in Java, R, C, C++, MATLAB, and more.

How it Works

[Diagram: clicking "Create Job" adds your job to a shared queue alongside other users' jobs.]

You create a unit of computational work that we call a job. We add your job to our queue and, when a core is free, run it automatically. If your jobs can't wait, you can get your own queue and tune the number of cores you get with a single click using our Realtime Cores feature. The next section shows how to create a job from our clients.
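The job lifecycle described above can be sketched locally using only the standard library. Here `ThreadPoolExecutor` stands in for PiCloud's queue and worker cores; this is an analogy, not the PiCloud API:

```python
from concurrent.futures import ThreadPoolExecutor

def add(x, y):
    return x + y

# Submitting work returns a handle immediately, much like cloud.call()
# returns a job id; calling result() blocks until a free worker ("core")
# has pulled the job off the queue and run it.
with ThreadPoolExecutor(max_workers=4) as executor:
    job = executor.submit(add, 1, 2)
    print(job.result())  # -> 3
```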

How to Use it (in a Nutshell)

>>> # define your function (or import it)
>>> def add(x, y):
...     return x + y
>>> # import our Python client
>>> import cloud
>>> # create a *job* that runs add() on the cloud
>>> job_id = cloud.call(add, 1, 2)
>>> print job_id

>>> # check the status of the job ('queued', 'processing', or 'done')
>>> cloud.status(job_id)
>>> # get the result
>>> cloud.result(job_id)
>>> # run add() across many datapoints
>>> cloud.map(add, datapoints)
host:~$ # create a *job* that prints hello, world to standard output
host:~$ JID=`picloud exec echo hello, world`
host:~$ echo $JID

host:~$ # check the status of the job ('queued', 'processing', or 'done')
host:~$ picloud status $JID
host:~$ # get the result
host:~$ picloud result $JID
hello, world
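The cloud.map call in the Python example above applies one function to many datapoints in parallel. Since `datapoints` was left undefined there, the local sketch below supplies sample argument pairs and uses the standard library as a stand-in for the PiCloud client:

```python
from concurrent.futures import ThreadPoolExecutor

def add(x, y):
    return x + y

# Sample datapoints; each element is one (x, y) argument pair for add().
datapoints = [(1, 2), (10, 20), (100, 200)]

# Like cloud.map, each datapoint becomes its own unit of work, and the
# results come back in the same order as the inputs.
with ThreadPoolExecutor() as executor:
    results = list(executor.map(lambda pair: add(*pair), datapoints))

print(results)  # -> [3, 30, 300]
```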

Cornerstones of the Platform

Easy to Use

PiCloud will get you on the cloud in as few as two lines of code. But ease shouldn't replace control, and you'll find that our clients are highly configurable to suit all workloads.


Scalable

Code in full confidence that your service will scale. Behind the scenes, we automatically scale our service to match your computational needs, from no load to peak usage. With Realtime Cores, you can take control and get the exact number of cores you want within minutes.


Robust

PiCloud brings a highly robust computing environment to the cloud. While infrastructure providers (Amazon, Rackspace, ...) make it your responsibility to build resilience to hardware and network failures into your applications, we make it ours. We've built redundancy and reliability into every corner of our service so you don't have to. The PiCloud Service Level Agreement commits to 99.9% availability.


Secure

Security isn't optional with PiCloud. All communication with PiCloud is encrypted using the Secure Sockets Layer (SSL) protocol. Your code and data on our cluster are protected by multiple layers of security, including POSIX permissions, Linux Containers (LXC), AppArmor, and Kerberos. In addition, we deploy the latest security patches and employ industry-standard security techniques. On request, PiCloud can support ITAR compliance by deploying on AWS GovCloud.

Cost Saver

Never again will idle servers drive up your infrastructure costs. With PiCloud, you don't have to think about server utilization, because you pay only for the exact amount of computation time (down to the millisecond) and data storage you use. We make it our responsibility to maximize utilization by sharing resources among users.
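Per-millisecond billing can be illustrated with simple arithmetic; the rate below is hypothetical, for illustration only, not a published PiCloud price:

```python
# Hypothetical per-core-hour rate, for illustration only.
rate_per_core_hour = 0.05  # dollars

# A job that runs for 90 seconds on one core is billed for exactly
# 90,000 ms of compute, not a full server-hour.
runtime_ms = 90_000
ms_per_hour = 3_600_000

cost = rate_per_core_hour * runtime_ms / ms_per_hour
print(round(cost, 6))  # -> 0.00125
```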


Automated Deployment

With PiCloud, you'll never boot up servers again. We route your jobs to our highly redundant systems. When a core is free, we deploy your job, bringing in all the dependencies it needs (environments, volumes, source code); when it's done, we bring everything back to you (output, logs, errors).

Tunable Performance

PiCloud offers a variety of compute resources on which to run your programs. Using our API, you can switch between core types and use multiple cores for a single job, making it easy to tune and optimize performance.

Core Type | Compute Units 1 | Memory | Disk | Max Multicore 2 | Use Case
c1 (default) | 1 | 300 MB | 15 GB | 1 | Simple tasks
c2 | 2.5 | 800 MB | 30 GB | 8 | Number crunching
f2 | 5.5 w/ HT | 3.7 GB | 100 GB | 16 | Well-rounded
m1 | 3.25 | 8 GB | 140 GB | 8 | Memory-intensive and I/O-bound tasks
s1 3 | 0.5 to 2 | 300 MB | 4 GB | 1 | IP rate-limited web scraping

1 A compute unit as defined by Amazon provides "the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor." All of our cores have 64-bit architectures.

2 Using multicore, a single unit of work can use N cores of the same type, gaining access to N times that core type's resources (compute power, RAM); for example, 8 c2 cores provide 20 compute units and 6.4 GB of memory. See documentation for more details.

3 Each s1 core has a unique IP address, which makes it ideal for scraping. Because it offers a variable amount of compute power, it should be used only for network-bound tasks where a unique IP is required.
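The multicore rule in footnote 2 can be made concrete with a small calculation. The helper below is hypothetical, with per-core specs copied from the core type table above:

```python
# Per-core specs from the core type table above (memory in MB).
CORE_TYPES = {
    "c1": {"compute_units": 1.0, "memory_mb": 300},
    "c2": {"compute_units": 2.5, "memory_mb": 800},
    "f2": {"compute_units": 5.5, "memory_mb": 3700},
    "m1": {"compute_units": 3.25, "memory_mb": 8000},
}

def multicore_resources(core_type, n_cores):
    """With multicore, N cores of one type give N times its resources."""
    spec = CORE_TYPES[core_type]
    return {key: value * n_cores for key, value in spec.items()}

# 8 c2 cores: 20 compute units and 6400 MB (6.4 GB) of memory.
print(multicore_resources("c2", 8))
```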

Python Integration

With our cloud library's deep Python integration, our platform is ready to execute your high-performance computing Python applications in the cloud. It only takes two lines of code and less than 5 minutes to begin offloading workloads to us.

Language Agnostic

We understand how important it is that a platform-as-a-service (PaaS) support all the various programs and libraries you need for your application, regardless of what language they were written in. Between creating a custom Environment, and using our command-line interface, you can use almost any programming language, program, or library.

Scientific Computing Ready

PiCloud has over 500 packages installed by default so that moving scientific computing applications to the cloud is seamless. See How to Use Scientific Tools (numpy, scipy, pandas, ...) on PiCloud. You can see a list of all default packages installed by examining the contents of our Base Environments. If a package you need is missing, you can create your own Environment.
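As a taste of what those preinstalled packages enable, here is a small numpy-based function of the kind you could offload. It runs locally here; passing such a function to cloud.call() is how it would reach PiCloud:

```python
import numpy as np

def mean_of_squares(values):
    """Mean of the squared values, vectorized with numpy."""
    arr = np.asarray(values, dtype=float)
    return float(np.mean(arr ** 2))

# (1 + 4 + 9) / 3
print(mean_of_squares([1, 2, 3]))  # -> 4.666666666666667
```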


Job Monitoring

PiCloud consolidates your computational history into one simple interface. You don't monitor servers; you monitor workloads, without mucking in server details. You can examine the innermost details of any job, including standard output, standard error, runtime, and exception tracebacks. You can even track a job's CPU, memory, and disk usage in real time.

PiCloud Jobs Dashboard

Comprehensive list of jobs.

Job Stats Realtime CPU Usage

Left: System-level information collected for each job.
Right: A graph of a job's CPU usage over time.


Usage Analytics

Ever wondered how many resources you use in aggregate and over time, but lacked the bandwidth to build a system to track it? We put a bird's-eye view of your usage a single click away.


Comparison with Amazon EC2

"[PiCloud] cuts our operational costs for managing the infrastructure by over 50%"

— Gary Rose, TiVo.

For those already familiar with Amazon EC2, or another virtual server provider (often known as Infrastructure-as-a-Service, or IaaS, providers), this section describes similarities and differences with PiCloud to be aware of.

IaaS providers require you to handle everything from the server on up, from configuring the operating system to implementing methods for distributing workloads across servers.

The overarching benefit of PiCloud is that you work with the job abstraction. You create jobs in the programming languages you're already using to describe your computation, and we deal with everything else it takes to get your computation running on the cloud. Here is a table summarizing the important differences:

Category | Amazon EC2 | PiCloud
At the end of your first lesson... | You'll have booted up a single server and begun configuring it. | You'll be running your actual workloads as jobs in the cloud.
Scaling Up | You'll develop a system for booting up and auto-configuring servers. | Just run jobs in parallel; they start in seconds, and we automatically scale for you. If you need even more cores, just click the number needed in our Realtime Cores interface.
Scaling Down | You'll stop or tear down your servers. Be sure not to lose any data! | Nothing to run? Then you aren't paying for computation.
Monitoring | You'll need to build or install a monitoring framework that likely gives you only server-level data. | You get application-level visibility into each job with no setup.
Building tolerance to server failure | You'll need to build redundancy for your data and compute nodes. | All data, compute, and management nodes are replicated.
Building tolerance to datacenter failures | You'll need to replicate your application across multiple datacenters. | We replicate nodes across multiple AWS availability zones (independent datacenters).
Handling Datasets | You'll survey the field, then pick, implement, and manage datastores. | We've optimized two datastores ideal for HPC (Buckets and Volumes), but you can always use your own.

Supported Clouds

Amazon Web Services


PiCloud's primary deployment is on top of Amazon Web Services (AWS) in the US-EAST Region. If you are already using AWS in the same region, your latency will be minimal, bandwidth will be high, and you will not have to pay for data transfer costs.

Private Datacenters

PiCloud offers an installation for private datacenters. We license our platform on a per core basis. The minimum deployment size is typically 200 cores. To learn more, please contact us.

Case Studies



D-Wave

Quantum computing pioneer D-Wave has sped up machine learning workloads 1,000-fold using PiCloud. Read it here →

Zinc.TV (Division of TiVo)


Internet television dashboard Zinc.TV aggregates daily video content from over 500 web properties using PiCloud. Read it here →

Flanders Institute for Biotechnology

The Flanders Institute for Biotechnology applies PiCloud to bioinformatics, specifically building a database and pipeline for comparative analysis of a subset of genes across different E. coli genomes. Read it here →