SLS: It’s all about freedom of choice… and $/GB


We are very excited here at OpenIO… today we are announcing our first ServerLess storage appliance, the SLS-4U96. I know it sounds very buzzword-ish, but there’s no better word to summarize what we are doing with SLS!

SLS appliances make our core SDS technology accessible to a larger number of use cases and organizations by simplifying the infrastructure, improving overall ease of use, and reducing TCO. Does that sound too much like a hard sell? Take a few minutes to read what I’ve got to say, then you can decide.

Let’s start from the beginning

If you’re familiar with OpenIO, then you already know that we started this venture a long time ago. We developed the software back in 2006, and the first infrastructure based on our object storage technology (now SDS, for Software Defined Storage) went into production in 2009. Time passed, and by 2014 that first installation had already reached 10PB. In 2015 we launched OpenIO, and it’s been growing steadily ever since, both in the number of installations and in the number of PBs under management.

We think different

We have accumulated a lot of experience and made many choices along the way. The OpenIO team has a clear vision of our goals and how to achieve them. It’s not only about having an open source product (many startups think this way now); we are also building a strong ecosystem around our core technology, enabling our customers to improve their business by giving them options. (Take a few seconds to look at the different OpenIO editions to see what I mean.)

We designed the core of SDS to be faster and more agile than any other object store, and we saw from the beginning that CPU usage in SDS clusters was very low. Thanks to our Conscience technology, we have managed to extract the best possible performance from the hardware, and our Grid for Apps framework lets customers run applications directly on the storage infrastructure.

You can think of SDS as a data-driven, hyper-converged infrastructure, with the difference that it is more scalable than any HCI, and your applications run with no hypervisor or operating system to manage. In practice, this is similar to what you can get from services like AWS Lambda, but more sophisticated, and on premises.

Applications can be triggered by events, and Conscience technology makes sure to schedule and balance jobs according to available resources. There are many use cases, including all types of data transformation and processing, ranging from file/object scanning and big data analytics, to real-time video transcoding, or even AI and deep learning.
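
To make this more concrete, here is a minimal sketch of what an event-triggered job could look like. The event fields and storage helpers are illustrative stand-ins, not the actual Grid for Apps API:

```python
# A minimal, self-contained sketch of an event-driven job of the kind
# Grid for Apps enables. Event fields and storage helpers are hypothetical
# stand-ins, not the actual OpenIO API.

import json

def fetch_object(container: str, name: str) -> bytes:
    # Stand-in for reading the object from the storage node it landed on.
    return b'{"example": "payload"}'

def store_object(container: str, name: str, data: bytes) -> None:
    # Stand-in for writing a derived object back to the cluster.
    print(f"stored {name} ({len(data)} bytes) in {container}")

def on_object_created(event: dict) -> None:
    """Triggered when a new object is written; runs next to the data."""
    container, name = event["container"], event["object"]
    data = fetch_object(container, name)

    # Example transformation: index JSON documents as they arrive.
    if name.endswith(".json"):
        doc = json.loads(data)
        summary = {"object": name, "keys": sorted(doc)}
        store_object(container, name + ".index", json.dumps(summary).encode())

if __name__ == "__main__":
    on_object_created({"container": "incoming", "object": "report.json"})
```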

Back to the basics

Grid for Apps is an amazing technology that takes full advantage of unused cluster resources. But not all applications are designed to run on the storage system, and infrastructure costs can easily be driven up by resources you don’t use. And that isn’t even the biggest issue.

We looked at how our competitors address this problem, and we found that their approach is not sustainable in the long term. Our R&D team worked hard on it, and we think we’ve found a great solution.

Putting many large-capacity disks in a single server is the usual approach for driving down $/GB. Configurations with 80 or 90 disks per server are quite feasible given current Intel CPUs and network speeds. But this approach has its drawbacks, notably in performance.

Yet the biggest challenge is not performance. An 80-slot server full of 8TB disks gives you 640TB of storage. No matter what size your cluster is, this is a huge failure domain, and cluster rebalancing after a failure can take a long time, impacting overall performance. Even though OpenIO avoids cluster rebalancing, this kind of configuration is unacceptable, especially for small clusters, simply because of the size of the failure domain.
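
A quick back-of-the-envelope calculation shows the scale of the problem (the four-server cluster below is just an illustrative size):

```python
# Back-of-the-envelope arithmetic for the failure domain described above.
# The 4-server cluster size is only an illustrative example.

disks_per_server = 80
disk_size_tb = 8

failure_domain_tb = disks_per_server * disk_size_tb
print(f"Failure domain per dense server: {failure_domain_tb} TB")   # 640 TB

small_cluster_servers = 4
share_lost = 100 / small_cluster_servers
print(f"One server lost in a {small_cluster_servers}-server cluster = "
      f"{share_lost:.0f}% of raw capacity offline")                 # 25%
```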

The first thing any customer looks at when it comes to object stores is $/GB. TCA (the acquisition cost) is easy to find, but TCO is a different story; it depends on several factors. Since object storage is all about long-term infrastructure sustainability, and about protecting a growing amount of data, it takes some time to analyze which factors can lower TCO.

This is exactly what we’ve done. We’ve identified some important areas to concentrate our research on:

  • Power consumption: the lower, the better.
  • Datacenter footprint: every inch counts.
  • Ease of use: several PBs managed per sysadmin.
  • Flexibility: $/GB first, but without sacrificing performance.
  • Granularity: buy only what you really need today.

We looked at a lot of options, such as Kinetic drives, low-power CPUs, dense chassis, and so on. They all have their pros and cons, but none of them was sufficient to meet all of the above needs.

But we found a solution.

Introducing SLS-4U96, the ServerLess storage appliance

Today we are announcing the first product of the SLS family: the SLS-4U96. We believe we have found the key to ensuring the best TCA and TCO, while also delivering high performance, ease of use, flexibility, and granularity. SLS was born from a partnership with Marvell, who shares our vision and provided the hardware components needed to build the most compelling object storage solution to date.

The architecture is based on what we call nano-nodes. These are very small ARM-based computing devices that sit in front of an HDD or SSD. Each one consumes only 3W and provides enough CPU, RAM, flash memory, and high-speed connectivity to easily run the SDS software. In this way, we get the smallest possible failure domain (1 HDD) without compromising on performance or power consumption. We use flash memory to speed up metadata access, and on-board nano-node power management can turn off the disk when it is not in use, saving additional power.

All nano-nodes are connected to two 6-port 40Gb/s switches, which provide back-to-back chassis connections for expansion as well as front-end connectivity. The SLS-4U96 has N+1 swappable power supplies and fans to eliminate any single point of failure.

Nano-nodes are very cost-effective, and the rest of the chassis packs all the components usually found in a datacenter rack into a single 4U enclosure. Multiple SLS-4U96 units can be stacked to reach higher capacity in a single rack. The first SLS-4U96 appliances are available with 8 and 10TB disks, with 12TB disks available soon. With 10TB drives, a single SLS-4U96 provides 960TB of raw storage (or about 740TB usable with erasure coding enabled).
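
For those who like to check the math, here is a rough sketch of the capacity figures, assuming a usable-to-raw ratio of 10/13 from the 13/10 erasure coding scheme described below:

```python
# Rough capacity math for a fully populated SLS-4U96.
# Assumption: the "13/10" erasure coding scheme is read as 13 fragments per
# object, 10 of them data, i.e. a usable/raw ratio of 10/13. That reading
# matches the usable figures quoted in this post.

slots = 96
ec_ratio = 10 / 13

for drive_tb in (8, 10):
    raw_tb = slots * drive_tb
    usable_tb = raw_tb * ec_ratio
    print(f"{drive_tb} TB drives: {raw_tb} TB raw, ~{usable_tb:.0f} TB usable")

# 8 TB drives:  768 TB raw, ~591 TB usable (the ~590 TB quoted with the launch pricing)
# 10 TB drives: 960 TB raw, ~738 TB usable (the ~740 TB quoted above)
```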

With this appliance, we have covered all the major points that can help reduce TCO:

> Power consumption: each nano-node consumes less than 3 watts, making it much more efficient than an x86-based system. Power management saves additional power when data is accessed infrequently.

> Datacenter footprint: 96 disks in 4 rack units is dense, don’t you agree?

> Ease of use: all the features of our innovative object storage technology are available on SLS. Auto-discovery of new nodes, a simple GUI and CLI, our Conscience technology, and an extensive set of APIs and file access methods are all included in the software used to manage large SLS configurations.

> Flexibility: SDS data protection features (replication and erasure coding), the lightweight backend design, and tiering-to-the-cloud functionality all remain intact. The nano-node architecture provides the best possible performance thanks to metadata that is always stored on flash memory, high-speed connections, and a 40Gb/s backend.

> Granularity: SLS can be expanded one disk at a time, and SDS allows you to build configurations with different disk types, giving customers the freedom to choose what to buy and when.

In addition, there is another advantage that contributes to lower TCO. Thanks to our nano-node architecture and the one-disk failure domain, it is possible to use a different service policy on this type of hardware. With a 13/10 erasure coding data protection scheme on a fully populated SLS appliance, up to 22 nano-nodes can fail before there is any risk of data loss. This means that, no matter how large the configuration, a customer can plan a visit to the datacenter once a month (or even less often) to swap failed disks. Auto-discovery and auto-configuration of new nano-nodes take care of the rest with minimal human intervention, saving a lot of time for everyone involved.
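
As a purely illustrative sanity check (the annual failure rate below is an assumed figure, not a measurement), the expected number of failed nano-nodes between monthly visits stays far below that tolerance:

```python
# Illustrative only: expected drive failures between maintenance visits,
# using an assumed 2% annual failure rate (a hypothetical figure).

nano_nodes = 96
assumed_afr = 0.02                                 # hypothetical annual failure rate
failures_per_month = nano_nodes * assumed_afr / 12

print(f"Expected failures per month: {failures_per_month:.2f}")   # ~0.16
# Even accumulated over several months this stays far below the 22-node
# tolerance described above, which is why infrequent swap visits are realistic.
```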

Conclusion

Is it clear now why we chose the name ServerLess? Nano-nodes rock, and SLS appliances, together with the ease of use and scalability of SDS, make object storage practical for a wider range of use cases. $/GB comes first, but with freedom of choice, performance, and improved efficiency as well, end users have many more options for building a sustainable infrastructure, regardless of its size, its growth, or the workloads they plan to run on it.

And, by the way, I almost forgot to mention that we are offering all of this for $0.008/GB/month! (That’s for an SLS-4U96 full of 8TB disks, 590TB usable with erasure coding enabled, with a 3-year SDS support contract.)

The SLS-4U96 is available today, and the first units have already shipped to our customers. Want to know more? Ping us on Twitter @OpenIO or send a contact request to sales@openio.com.
