By now, you’ve likely heard about Caringo Swarm Hassle-Free, Limitless Object Storage: what it is, how it performs, its built-in continuous data protection, and how it’s helped all kinds of folks in a variety of industries. The concept of object storage is quite powerful, both from a user perspective and from an administrative one. We’ve discussed the simplicity of actually running and maintaining our object storage solution (one admin, multiple petabytes…that’s pretty sweet); however, today I’m going to tell you about what Caringo Swarm is like from a user perspective.
After all, a storage solution isn’t really much good if it’s tricky to get your data in or out. It’s perhaps even worse if you can’t find your data once you’ve put it in there. And what about protecting your data and limiting who can do what with it?
Thankfully, with Swarm’s Content Portal, we make all of these tasks easy.

Getting Data In and Out
Putting data into Swarm is as easy as drag-and-drop, and we support tagging your upload with custom metadata however you see fit for easier access in the future. Once your data is safely stored in Swarm, it can be accessed again at any time. And as always, data that you have uploaded via the Content Portal is equally accessible via S3, NFS, or any custom application you have talking to Swarm.

Finding Your Data
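The drag-and-drop tagging described above corresponds to plain S3-style HTTP requests under the hood. Here is a minimal sketch of how custom tags become `x-amz-meta-*` user-metadata headers on an S3-compatible upload; the tag names, bucket, and endpoint are illustrative, not taken from a real cluster:

```python
# Hedged sketch: S3-style APIs carry user-defined metadata in
# x-amz-meta-* request headers on the PUT that uploads the object.
def metadata_headers(tags):
    """Map user-defined tags onto the x-amz-meta-* headers an S3-style PUT expects."""
    return {f"x-amz-meta-{key.lower()}": value for key, value in tags.items()}

headers = metadata_headers({"Project": "apollo", "Department": "marketing"})
# A real upload would then look something like (endpoint/bucket are placeholders):
#   requests.put(f"https://{endpoint}/{bucket}/report.pdf", data=body, headers=headers)
```

Because the metadata rides along with the object itself, the same tags are visible whether you come back through the Content Portal, S3, or NFS.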
Navigating is as simple as clicking through your computer’s native file explorer. And we take it one step further by supporting Collections. You can think of a Collection as a “smart folder,” similar to a bucket, that presents all of the data matching a set of criteria you specify. We can even search on the custom metadata you added at upload (watch The Power of Metadata in Object Storage webinar recording). Metadata lets you organize your data any way you like for easy access. Feel free to pat yourself on the back for adding those tags; you’ve earned it!

Guarding Your Data
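The “smart folder” idea can be sketched in a few lines. The object records and criteria below are invented for illustration; in Swarm itself, Collections are evaluated server-side against the metadata index rather than by client-side filtering:

```python
# Illustrative sketch only: a Collection is a saved query, so membership
# is computed from metadata rather than from folder placement.
def collection(objects, criteria):
    """Return the objects whose metadata matches every criterion."""
    return [
        obj for obj in objects
        if all(obj.get("metadata", {}).get(k) == v for k, v in criteria.items())
    ]

uploads = [
    {"name": "q3-report.pdf", "metadata": {"project": "apollo", "status": "final"}},
    {"name": "draft.docx",    "metadata": {"project": "apollo", "status": "draft"}},
]
finals = collection(uploads, {"project": "apollo", "status": "final"})
```

The same object can appear in many Collections at once, which is exactly what a folder hierarchy cannot do.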
Furthermore, once a user has logged in, that user can only see the tenants, domains, buckets and content to which they have been granted access. If you happen to be in charge of a bucket, domain, or tenant, setting up security policies is a snap with our policy wizard. You can even set usage quotas based on storage and bandwidth consumed.
All of this accessibility, searchability, and protection applies to data you’ve written into Swarm via S3, NFS (check out our Tech Tuesday Using NFS with Object Storage webinar recording) or your custom applications. We don’t like silos when it comes to data, and wouldn’t dream of locking your data into them.
Sign up for our next Tech Tuesday webinar on January 22 at 11am PT/2pm ET when John Bell and I will take an in-depth dive into the Swarm Content Portal, provide a live demonstration and answer your questions.
The post Simplify Data Management with the Swarm Content Portal appeared first on Caringo, Inc.
Last week, we listed the 9 Reasons to Use Object Storage in 2019. In this blog, I discuss some macro-market trends we have noticed in 2018 and predict how they will influence object storage use in 2019.

1. On-demand will be in demand across many user-driven workflows
The rise of streaming services, cloud services, mobile devices and ubiquitous broadband is conditioning employees and customers to expect everything from their entertainment to their office documents to be online and accessible. From an IT perspective, there are a few layers of services that enable on-demand workflows, including storage—more specifically, movement of data to the right storage at the right time (aka tiering); search; tenant and identity management; and content delivery. In 2019, IT professionals will start to understand that the value of object storage isn’t just scale; it is the merging of many of the services needed to enable on-demand workflows into one solution that includes web-based and API-accessible management.

2. Hybrid cloud will come into focus
The cloud is here to stay. The investments the big 3 (Amazon, Microsoft and Google) are making in cloud are astounding. This is leading many enterprises to adopt a cloud-first or hybrid-cloud policy in an attempt to simplify IT management, reduce data center costs and benefit from advancements in cloud services. However, not all workflows and datasets moved to the cloud will deliver these results. This year, you will start to see more understanding of which workflows and datasets to run in the cloud and which to keep on prem. You will also see the big 3 push on-prem solutions (like Amazon Outpost).

3. AI, ML, IoT, AR, VR…did I miss anything?
Look at just about any 2019 IT prediction list and you will undoubtedly see one of these acronyms. Behind all of these technologies are gobs of data being generated that must remain accessible and searchable. The resulting datasets consist of various file formats and file sizes with differing retention and accessibility requirements. The need here is similar to the end-user-driven, on-demand needs discussed in prediction #1, but for automated and machine-based processes. The ability to consolidate distributed datasets under a single namespace and provide API-based, multi-protocol access will be key.

Increasing Velocity and Simplifying Data Movement
Underlying these trends is an increase in velocity. Workflows and processes in general are becoming more efficient and are accessed in a distributed fashion via various devices, and the rate of change is increasing. These emerging trends inspired the Caringo FileFly solution, which gives you complete, automated data lifecycle management of all your unstructured data—from creation to preservation. FileFly simplifies migration of secondary data from traditional NAS to the cloud. And now, you can use it with Amazon AWS, Microsoft Azure and Google Cloud as well as with Caringo Swarm Hassle-Free, Limitless Object Storage.
Contact us to learn more about how FileFly can help you simplify your data movement, and to find out if you qualify for our free 25 TB FileFly Community Edition.
Two years ago, we outlined 7 Reasons You Need Object Storage in 2017 from the perspective of how object storage technology functions. In 2018, we examined eight market trends that support the case for using object storage. The functionality and market trends that we discussed in prior years are still relevant.
However, in this past year, we’ve continued to evolve our product line to keep up with the growing demand for affordable, S3-compatible storage solutions.

Why Caringo Object Storage Solutions?
As we usher in the new year, I’d like to share the most compelling and timely reasons driving interest in Caringo’s object storage solutions:
- Efficiency. Our CEO Tony Barbagallo posed the question: Can Object Storage Really Replace Parallel File Systems? He spoke to HPC use cases and came up with a resounding “YES,” noting that this was absolutely possible with Caringo Swarm. As illustrated in recent testing at the Science and Technology Facilities Council (STFC) Rutherford Appleton Laboratory (RAL) Space supporting the JASMIN project, Swarm outperformed other leading object-based storage solutions, providing simple multi-protocol (S3/NFS) internal/external file sharing and automated tenant management. Visit the STFC Deploys Caringo Swarm in Its Super Data Cluster landing page for more information.
- Convenience. Our new Swarm Single Server is an on-prem, S3-accessible storage appliance with built-in content management. It provides all the hardware and software you need to keep archived content online, searchable and web-accessible—secure within your network. Start small with just one server then scale out to match the pace of your organization’s storage needs.
- Evolution of Storage Requirements for Video. In our always-on world, video is increasingly responsible for the data deluge—from production workflows to active archives to OTT streaming. With the rate of content creation, delivery, distribution and reuse increasing, there is more need than ever for cost-effective S3 storage. Watch last month’s 5 Tiers of Storage for New Video Production Workflows webinar to learn more.
- Ease of Management. Swarm provides transparency into your storage cluster as all management features can now be accessed through the same API. For a demo, watch the recording of our Tech Tuesday Webinar: Using the Swarm Object Storage User Interface.
- Freedom for Your IT Staff. Chained to a desk? Not with Caringo. Manage an entire Swarm object storage cluster from any laptop, tablet or mobile device with interactions performed through a series of contextual links. Everything you need to get a real-time status of your cluster can now be in the palm of your hand as you walk through your data center or, better yet, sit in a sports bar with a beer in hand. Learn more by reading this blog.
- Bring File Archive Into the Cloud Age. SwarmNFS 2.0 leverages a powerful patent-pending feature in Swarm that allows a client to send only the data of an object that has changed, dynamically reducing the bandwidth requirements and the time for a client to update existing objects.
- Automated Data Lifecycle Management. Working with both NetApp and Windows filers, Caringo FileFly simplifies the migration of secondary data from NAS to Caringo Swarm, Amazon AWS, Microsoft Azure or Google Cloud. FileFly enables you to consolidate data on a single platform, easily add a remote disaster recovery (DR) site, back up files transparently and seamlessly, and provision primary storage.
- Integration with Best-of-Breed Technologies & Resellers. Caringo partners with leading technology companies to ensure complete end-to-end scale-out storage solutions in Media & Entertainment, HPC, Enterprise IT, Medical and Cloud Technology. We work with the leading systems integrators, resellers, and distributors to develop complete solutions to your most pressing storage challenges. Learn more by reading our Solution Briefs and reviewing our list of partners.
- Professional Services and Support. As the pioneer in object-based storage, Caringo employs the most experienced engineers in the business. From architecting your proof of concept to keeping your organization up and running with our global 24x7x365 support team, our business is helping your business succeed.
Ready to Learn More?
Contact us with your questions or to schedule a consultation or demo.
In last week’s blog, we started our countdown of the 18 greatest hits of 2018 with a recap of our most popular webinars and blogs. This week, let’s take a look at our most popular videos and papers that explore Caringo Swarm Hassle-Free, Limitless Object Storage and the other products in our holistic ecosystem for cost-effectively storing, protecting, managing and delivering data in our on-demand world.

Videos on Object Storage
Speaking of our on-demand world, we continue to see a trend of audiences wanting information delivered via video. These 2 videos were our most viewed in 2018:
And, clearly, many of us still prefer to read some of our content. So, our final 4 spots of the 18 are devoted to our papers and briefs that we saw downloaded most often:
- Elastic Content Protection Technical Overview
- Storage Switzerland Optimizing Windows File Servers eBook
- 5 Crucial Problems With Big Unstructured Data Analytics and How to Fix Them
- Caringo SwarmNFS Product Brief
Thanks once again for counting down the hits with us. We are looking forward to kicking off the new year with more educational content on object storage, including our first Tech Tuesday webinar of the year, Using the Swarm Object Storage Content Portal UI, on January 22. John Bell, Sr. Consultant, and Brian Guetzlaff, Engineering Manager, will provide an in-depth look at our content portal user interface and be on hand to answer your questions about object storage.
Happy holidays from the entire Caringo team!
The post Countdown: 18 Object Storage Hits of 2018 (Part 2) appeared first on Caringo, Inc.
As 2018 comes to a close, let’s take a look back at our object storage educational content that was most viewed this year.

Top 8 Webinars
This year, we introduced the Tech Tuesday webinar series to provide in-depth education and an opportunity for our viewers to ask questions of our technical staff, the most experienced object storage engineers in the industry.

Of those twelve webinars, the favorites were:
- Evaluating Object Storage Solutions
- Running Object Storage on a Single Server
- Migrating from SAN, NAS and Tape to Object Storage
- Storage Switzerland: NAS vs. Object: 10 Reasons Object Storage Will Win
- Power of Metadata in Object Storage
- Elasticsearch and Object Storage
- The Cloud & Object Storage Platform of the Future
- Swarm 10: Storage for the On-Demand, Distributed World
Top 5 Blogs
- Elasticsearch & Object Storage: PB-Scale Search Solved
- Scaling Your Data Storage for the Age of Zettabytes
- Top IT Professionals Choose These 3 Solutions to Transfer Large Files
Next week, we will continue our countdown of our most popular content. Have questions? Contact us to speak with an object storage expert or schedule a custom demo.
The post Countdown: 18 Object Storage Hits of 2018 (Part 1) appeared first on Caringo, Inc.
I am sure everyone has heard the phrase “tools of the trade.” It refers to the tools needed to do a certain job. For those who work with video workflows, the tools of the trade are quickly evolving. This evolution is being driven by increasing file sizes (4K and 8K) and on-demand use cases (OTT, VOD, etc.). Both the rate of content creation and the rate of content delivery, distribution and reuse are increasing.
After talking to hundreds of professionals in creation and infrastructure at various events, it seems like most understand what needs to be done to increase the rate of content creation. They are adapting to these demands with faster storage and enhancements in existing editing platforms. However, many organizations are struggling with content delivery, distribution and reuse. Often, they are trying to enable new workflows with existing applications and storage infrastructure. This approach is often ineffective and frustrating as the tools of their trade are changing.
Unfortunately, when introducing new workflows and rising to meet the challenges of increasing file sizes and on-demand use cases, one size does not fit all. Your specific project requirements, in-house skill sets and budget will dictate what path is best for you. A complete solution will most likely mean combining multiple applications, multiple tiers of storage and working with multiple service providers. This doesn’t necessarily mean that management of the final solution will be complex. However, in today’s on-demand world you can no longer expect to take the “one-throat-to-choke” approach and still stay ahead of your competition. You need to give yourself the ability to combine best-of-breed solutions.
One of the best ways to do this is through a trusted advisor…often a value-added distributor, solutions provider or reseller. For video production workflows, one of the most experienced organizations we have partnered with is JB&A Distribution. As experts in video and content lifecycle solutions—combining best-of-breed infrastructure and regional reseller integration and support services—they are constantly keeping tabs on how tools of the video workflow trade are evolving.
Nicholas Smith, JB&A Director of Media Technology, will be joining me as the main presenter on our upcoming webinar, 5 Tiers of Storage for New Video Production Workflows. Nick and I will take a deep dive on the evolving requirements for video workflow storage, covering everything from super-fast NVMe for real-time 8K editing to on-prem cloud storage, cold archives and everything in between.
This year, we initiated a series of educational webinars for a highly technical audience—by engineers for engineers. The resulting Tech Tuesday series quickly became a cornerstone of the content on our BrightTALK channel, and something we look forward to every month.

Take a look at the topics we’ve covered this year:
- How Does Object Storage Fit into Your IT Infrastructure?
- Hardware Selection Criteria for Object Storage
- Capacity Planning and Scaling for Object Storage
- Best Practices for Object Storage Installation and Management
- Solving Exabyte-Scale Search (and Beyond!)
- When to Choose Object Storage over NAS for Digital Video Workflows
- Object Storage for HPC
- Evaluating Object Storage Solutions
- Migrating Data from SAN, NAS or Tape to Object Storage
- Running Object Storage on a Single Server
- Using the Swarm Object Storage User Interface
For the last Tech Tuesday webinar of 2018, we bring you Using NFS with Object Storage. Hosted by John Bell, Sr. Consultant, the webinar will feature Michael “Q” Brame, our Quality Engineering Lead. Protocol compatibility has long been a focus for us at Caringo, and Q, who has been with Caringo since 2006, has seen it all.
In 2016, Caringo launched SwarmNFS, the first lightweight file protocol converter to bring the benefits of scale-out object storage to NFSv4, seamlessly integrating files and object storage. Q will demonstrate how SwarmNFS can be used with Swarm, our hassle-free, limitless object storage platform. You can watch live and bring your questions, or catch the recording on demand after the event.
The post The 12 Months of Tech Tuesday Object Storage Webinars appeared first on Caringo, Inc.
When trying to understand a new technology, nothing beats a demo. In this month’s Tech Tuesday webinar, Sr. Consultant John Bell and UI Engineer Brian Guetzlaff gave a detailed demo of the Swarm Storage Management UI.
As shown in the image below, there are two high-level paths for Swarm Management UIs. The Content Management UI is used for managing the actual content stored, metadata, tenants, authorization and access. The Storage UI provides real-time status, historical metrics and the ability to customize a lot of the settings for the underlying storage infrastructure (hardware and software).

Caringo Swarm UI Sitemap
We selected this topic for November because of recent enhancements that moved from a hard-coded Swarm UI to one based on RESTful APIs and HTML5. These changes are significant as they provide access to all management features from the same API. This simplifies integration into various tool sets and also means that Storage Admins can manage the entire cluster from any laptop, tablet or mobile device. All interactions are performed through a series of contextual links. Therefore, everything you need to get a real-time status of your cluster—whether it is a few hundred Terabytes or a few hundred Petabytes—can now be in the palm of your hand as you walk through your data center.
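As a rough illustration of what “every management feature behind one RESTful API” means in practice, a status check collapses to building a URL and issuing an HTTP GET. The host, port, and path below are placeholders for illustration, not documented Swarm endpoints:

```python
# Hypothetical sketch: a management check over a RESTful API is just
# a URL plus a GET, which is why any browser-equipped device can do it.
from urllib.parse import urlunsplit

def status_url(host, port=91, path="/api/storage/clusters/_self"):
    """Build the URL a dashboard, a script, or a phone browser would request."""
    return urlunsplit(("http", f"{host}:{port}", path, "", ""))

url = status_url("swarm.example.com")
# urllib.request.urlopen(url)  # the same call from a laptop, tablet, or phone
```

The point of the design is that the official UI, your own scripts, and third-party tools all consume the identical API, so nothing is reachable from one that is hidden from the others.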
On December 11, join us for our next Tech Tuesday webinar: Using NFS with Object Storage. In this webinar, John Bell and Michael “Q” Brame, our Quality Engineering Lead, will discuss using NFS with object storage and explain how this has been handled by traditional file-to-object gateways. The webinar will also include a detailed demo of Caringo SwarmNFS, which was recently benchmarked at 1.6 GB/s sustained streaming from NFS to object storage without the use of expensive cache (that is 3 PB+ per month per instance).
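The parenthetical figure above is easy to sanity-check with back-of-the-envelope arithmetic (decimal units, a 30-day month):

```python
# Sustained 1.6 GB/s, carried for a 30-day month (1 PB = 1e6 GB, decimal).
seconds_per_month = 60 * 60 * 24 * 30          # 2,592,000 s
pb_per_month = 1.6 * seconds_per_month / 1e6   # about 4.1 PB
# Comfortably above the "3 PB+ per month per instance" quoted above.
```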
The post Object Storage Cluster Status in the Palm of Your Hand appeared first on Caringo, Inc.
Data is quickly gobbling up storage resources. The 2017 IDC whitepaper Data Age 2025 referenced in almost every recent article on data growth presents many startling statistics and predicts that:
…over 19ZB of storage capacity must ship across all media types from 2017 to 2025 to keep up with storage demands.

Reinsel, David; Gantz, John; Rydning, John (April 2017). The Evolution of Data to Life-Critical. Retrieved from https://www.seagate.com/files/www-content/our-story/trends/files/Seagate-WP-DataAge2025-March-2017.pdf
To respond to this type of data growth, organizations of every size will need to adopt a holistic approach that incorporates multiple types of storage. I’ve often heard object storage referred to as a “niche technology.” However, the issues that object storage solves certainly aren’t “niche.”

Swarm Object Storage Comes of Age
Earlier this year, we celebrated Caringo turning 13. We released our first object storage software in 2006 and, with our 10–10 launch of Swarm 10, we now have what is arguably the most mature object storage solution on the market. (Read more in the blog Swarm 10 Object Storage for the On-Demand, Distributed World.) While we continue to be a cost-effective storage solution—often reducing overall storage TCO for customers by up to 75%—we have expanded our scalability both up and down so that small and mid-size organizations can reap all the benefits of object storage that our larger customers have enjoyed for years. In addition, we now have performance benchmarking tests that delivered 35 GB/s for S3 read throughput.

An Easy Button for On-Premises S3 Storage
For many years, object storage was only a viable solution for those with a few hundred TBs or more of data. However, many organizations with smaller capacity needs now realize that object storage can help them provide metered storage to employees or end users, enabling them to keep their data accessible over the web without utilizing a web or FTP server. This is why we added the Swarm Single Server to our product lineup. Designed specifically to meet the needs of small- to medium-sized content-driven organizations, it provides a complete on-premises S3 solution with built-in content management and delivery in a single server with 96 TB of raw capacity. With the Swarm Single Server, we essentially provide an “easy button” for organizations that need a path to expansion as they grow their digital assets. (Learn more about the Swarm Single Server.)

Working Together for Better Solutions
One of our core missions is finding ways we can enable businesses to explore storage solutions that fit their particular use cases. Our efforts include:
- Expanding our technology partnerships and interoperability testing
- Adding more certified Caringo resellers who bring industry and region-specific knowledge
- Providing advanced tools like FileFly, SwarmNFS and Caringo Drive to ease management and movement of data
- Offering a free Swarm Developer’s Edition and a FileFly Community Edition to qualified applicants (apply on Caringo Connect)
- Presenting educational content on object storage (check out our webinars)
If you need help eliminating storage silos and ensuring your data is easily accessible and secure, contact us for a customized demo or to speak with one of our storage architects. For our current customers, the Caringo Support Team is available 24x7x365, helping make your holiday season stress- and hassle-free.
Last April, I had the pleasure of speaking at the Salishan Conference on High Speed Computing, where I presented two interesting use cases for object storage in an HPC ecosystem. The first, and more traditional, use case is an economical active archive for simulation data, and it makes perfect sense. Once the computational analysis is over, why use expensive primary storage (think NAS and SAN) to house the raw data and results of simulations indefinitely?
It makes much more sense to store that data on cost-effective scale-out object storage that is fully searchable, easily accessible both inside the organization and over the internet (think URL to the object), and allows for NFS and S3 access to the same objects. In short, these are some of the many benefits of RESTful object APIs over traditional POSIX-based filesystems. Of course, the downside is that data must be moved/copied from object storage to file-based storage to run compute-intensive analytics on that data.
But what if you could remove the network and storage performance bottlenecks of the object storage environment? Could you then replace the primary storage (traditional POSIX and parallel file systems) with more economical object storage? That leads me to the second and, quite frankly, more innovative use case: a new storage, tenant and content management solution for read-intensive HPC workflows that could actually replace parallel file systems.
Those are the exact questions that the team at the Science and Technology Facilities Council (STFC) Rutherford Appleton Laboratory (RAL) Space supporting the JASMIN project set out to answer back in 2016.
First, they tackled the network latency. STFC employs an HPC “leaf/spine” routed CLOS network with 100 Gb spine switches and 100 Gb leaf or top-of-rack (TOR) switches. Every TOR is connected to every spine switch, and there is equal uplink/downlink bandwidth on every TOR switch. This design delivers a super low-latency, non-blocking network where there are only 3 switch hops of <100 ns between any network endpoint—orders of magnitude lower latency than an ordinary network.
Next up was to identify an object storage solution that could deliver those benefits mentioned above, while at the same time achieving the performance required to replace their parallel file system for read-intensive workloads…enter Caringo Swarm.
At its core, Swarm is built around a “pure” object storage architecture that is simple, symmetrical and does not rely on traditional storage technologies such as caching servers, file systems, RAID or databases. Instead, data is written raw to disk together with its metadata, meaning objects are “self-describing.” Identifying attributes such as the unique ID, object name and location on disk are published from the metadata into a patented and dynamic shared index in memory that handles lookups.
This design is quite “flat,” infinitely scalable and very low latency as there are zero IOPS to first byte. It also eliminates the need for an external metadata database both for storing metadata and as a mechanism for lookups. Automatic load balancing of requests using a patented algorithm allows for all nodes and all drives to be addressed in parallel, removing the need for external load balancers and front side caching—both of which can present significant performance challenges in an HPC environment where total aggregate throughput is the goal rather than single-stream performance.
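The “self-describing object plus in-memory index” design described above can be sketched conceptually. This is an illustration only, not Swarm’s actual on-disk format or its patented shared index:

```python
# Conceptual sketch: each object is written together with its metadata
# ("self-describing"), and only lightweight identifying attributes are
# published into an in-memory index. A lookup therefore costs zero disk
# IOPS before the first byte: the index hit is in memory, then one read.
disk = []    # stands in for raw writes to a drive
index = {}   # object id -> position on "disk"; the real index is shared cluster-wide

def write_object(oid, name, data, metadata):
    record = {"id": oid, "name": name, "metadata": metadata, "data": data}
    disk.append(record)                # data and metadata land together
    index[oid] = len(disk) - 1         # publish identifying attributes to the index
    return record

def read_object(oid):
    return disk[index[oid]]            # in-memory lookup, then a single read

write_object("abc123", "frame-0001.dpx", b"...", {"scene": "7"})
```

Note what is absent from the sketch, mirroring the paragraph above: no external metadata database, no file system layer, and no cache sitting between the request and the disk.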
For S3 testing, COSBench was used to run ramp-up tests leveraging up to 20 physical client machines to measure the throughput potential of the entire Swarm cluster. Sequential tests were run using 2 GB erasure-coded files. In this environment, Swarm achieved 35 GB/s throughput, over 60% better than the minimum requirement. You can download the complete benchmark whitepaper here.
So…can object storage replace parallel file systems? In an HPC environment where high-aggregate read throughput, as well as durability and accessibility of data over a common protocol (such as S3 wrapped in a multi-tenancy framework), are required, the answer is a resounding YES with Caringo Swarm Hassle-Free, Limitless Object Storage!
If you grow up in Texas, you know that “Big D” means Dallas—the legendary city where the series of the same name and the movie Urban Cowboy were filmed. Famous for oil barons, cowboys and honky-tonk bars, Dallas is so much more. It is one of the fastest growing cities in the country and part of the burgeoning DFW (Dallas/Fort Worth) metropolitan area.
The DFW area is home to Fortune 500 companies, diverse industries (including information technology (IT), defense, financial services, telecommunications and transportation), private and public universities, and professional sports teams (Texas Rangers, Dallas Cowboys and Dallas Mavericks). Want more? Dallas is a mecca for shopping and fine dining as well as the home of the Texas State Fair and from November 12–15, it will be the home of SuperComputing ‘18 (SC18).
SC18 marks the 30th anniversary of the SuperComputing Conference Series. The infrastructure of high-performance computing (HPC) and its community have grown exponentially since this conference originated in 1988. At Caringo, we continue to see the trends our VP Marketing Adrian “AJ” Herrera wrote about in last year’s blog leading up to the SC Conference, including increasing data sets, visibility and end-user expectations.
Another Big D—Big Data
So, let’s talk about a different Big D—Big Data. File sizes and data sets continue to skyrocket, particularly for research institutions and laboratories as well as for those who deal with digital video files. Concerns for IT pros and researchers are not limited to size, volume and protection; they also include how to manage and efficiently distribute data. Environments now need to support multi-protocol access (POSIX and RESTful) for users and applications while eliminating those pesky old data silos.
Seeing is Believing
More than just access, IT and storage admins need visibility into the data. And more than just visibility: they need to be able to see who is accessing what and know precisely the resources those users are consuming.
The Times They Are A-Changin’
To use the iconic lyrics of the Bob Dylan tune, the times they are a-changin’. In today’s on-demand distributed research environments, waiting minutes, hours or even days to access data is untenable. Additionally, not being able to easily distribute data and know what resources specific applications and end users are utilizing is no longer an option. This has led many organizations in the HPC space to come to the conclusion that they need to expand their storage infrastructure (beyond parallel file systems and tape) with object storage—whether it is by building a private cloud storage service, tiering data to a public cloud or using a hybrid cloud solution.
Swarm Hassle-Free, Limitless Object Storage has made it easy to do just that. It handles concurrent requests in parallel yielding the full throughput potential of all drives in the system, rivaling parallel file systems for read-intensive workflows. To learn more:
- Watch our Swarm 10 webinar on demand to hear about our latest updates to the Caringo product line.
- Visit us during SC18 expo hours at booth 4035 (contact us if you need a free expo pass or want to set up an appointment with an object storage expert).
- Check out our video demos and then sign up on Caringo Connect to request a free, full-featured 10TB Swarm Dev Edition.
If you are at SC18, don’t forget to join us for happy hour Tuesday and Wednesday at 2 pm in booth 4035. Co-sponsored by our partner Boston Limited, you can quench your thirst for object storage knowledge and beer with a “Dallas Blonde” (American Ale) from Deep Ellum Brewing Company.
I was on an analyst briefing call a few weeks ago and something the storage analyst said really stood out. I am paraphrasing a bit, but the comment was “…the conversation is always NAS OR Object Storage and it really should be NAS AND Object.” The analyst who said it has a solid handle on the differences between the two types of storage. That said, to those who aren’t students of storage nuances, understanding the right storage tier to use at the right time is a daunting task, especially when storage vendors claim they do it all. And, as anybody who has ever tried to drill a hole, crimp coax cable, or cut a piece of wood knows, any task is a lot easier with the right tool. So, how do you find the right tool for your needs? Well, you either have to understand what you are trying to achieve and then architect and test a solution yourself, or you reach out to a trusted advisor.
Understanding what you are trying to achieve can be the most difficult part. Not because you don’t know your workflow, but because of how vendors (us included) communicate features, functions and benefits. Often, features are put on a matrix with check boxes, and if a feature is mentioned on a vendor’s site, the box is “checked.” S3 support (check), geographic replication (check), NFS support (check), versioning (check) and so on. Pretty straightforward, right? Well, not really. As we move to software-defined data centers and POSIX-plus-RESTful workflows, variability is everywhere. For instance, S3 is now a de facto standard, not an actual standard verified by a governing organization. To that point, all storage vendors and application providers support S3 differently. This means that features, performance and general functionality vary across storage types and applications. From the vendor and application perspective, S3 may be supported, but it may not fit your specific workflow requirements. So what do you do?
Some organizations have the necessary skill sets in house to perform the proper analysis and to architect and implement the right solution. At Caringo, we will help wherever we can if this is the path you want to take. In fact, we just announced an appliance, Swarm Single Server, to make implementing object storage easier. The Swarm Single Server takes hardware and reference-architecture questions off the table. However, application interoperability and complete workflow integration for your environment still need to be certified. If you don’t have all the necessary skill sets and components in house, this is where a trusted advisor comes in.
One of the best examples of this I have seen is what our partner, Melrose Tec, did at their Open House last week. They combined three different storage solutions, hooked in the necessary content ingestion and editing workstations, networked it all via a Mellanox 100 Gbit switch, and had all servers and software user interfaces on display in their lab. In doing so, they saved their clients a tremendous amount of time, validated that all the technologies played well together, and confirmed that the necessary performance was achieved. They combined NVMe storage (Excelero), a GPFS file system layer (Pixit Pixstor) and active archive storage (Caringo Swarm) and demonstrated a complete workflow. They did this by having one of their customer’s colorists come in and work on 8K footage in Resolve in real time. Then, they demonstrated tiering this footage to archival storage and instantly calling it back.
As always, we at Caringo are here to help determine if Swarm fulfills your workflow requirements, but as Melrose Tec’s demo shows, we are sometimes just a piece of a broader solution. We have a number of partners that can help and we recommend browsing our resource section, especially our webinars, for up-to-date information on current trends and workflows. And, of course, you can always contact us and speak with one of our storage experts.
Returning from the NAB Show in New York City, I was thinking of how many times I’ve made the remark that “one size does not fit all.” While usually this is associated with fashion, it is also true of data storage.
Massive scalability is one thing, but what if you need to start small? This is why Caringo recently launched the new Swarm Single Server—a complete on-prem S3 solution with built-in content management and delivery in a single server with 96 TB of raw capacity.
After speaking with the diverse attendees at the NAB expo about our latest addition to the Caringo product line, I could see how excited they were at the reality that there was now a simple way to start using object storage. Many are struggling as they outgrow traditional SAN, NAS and tape storage solutions. Just as important, they are tired of searching for files on disparate storage and having it inconvenience them, interrupt their workflow, and impede their ability to deliver content to customers.
When we talk about the benefits of object storage, what does that mean on a smaller scale?
- Built-in content management, search and delivery
- Archives are online and secure within your network
- Ongoing costs and the risks of cloud storage are reduced
For any small- to medium-sized business that needs to keep their data online and accessible, this storage model is attractive. We all know that the “pay-as-you-grow” model makes sense. Join Senior Consultant John Bell and Engineer Jamshid Afshar next week for our Tech Tuesday Webinar: Running Object Storage on a Single Server as they explain how you can store, manage, search and deliver data with just one server, while maintaining the ability to scale out by simply plugging in additional servers as your data storage needs grow.
To learn more about object storage, check out our entire 2018 Tech Tuesday webinar series, now available on demand.
Today is a landmark day for Caringo, as we launch updates to our entire product suite. Our Swarm 10 Platform is the culmination of over a decade of market hardening and continuous innovation to satisfy customers who are driven by on-demand, distributed workflows. It leverages Swarm’s unique pure-object approach to deliver unrivaled performance at petabyte scale. Before we get into the details of today’s launch, let’s talk about what I mean by “pure-object.”
What is Swarm’s Pure-Object Architecture?
Caringo Swarm has always used a purpose-built storage design that efficiently utilizes standard disks without expensive caching layers, RAID controllers or layers of software. Swarm object storage software is a parallel architecture that boots from bare metal and runs completely in RAM. Swarm is native object storage, meaning each object’s metadata is encapsulated with that object’s data as a single entity. This means that no separate SQL databases or filesystems are required to track objects. With Swarm, objects are completely portable.
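The idea that each object’s metadata is encapsulated with its data can be sketched in a few lines of Python. This is a toy model for illustration only, not Swarm’s actual on-disk format: each stored object is one self-describing blob, so no external database row or filesystem entry has to be kept in sync with it.

```python
import json

# Toy model of a "pure object": metadata and data serialized as one unit,
# so the object is portable without a separate SQL database or filesystem.
def pack_object(data: bytes, metadata: dict) -> bytes:
    header = json.dumps(metadata).encode()
    # 4-byte big-endian header length, then the metadata, then the data.
    return len(header).to_bytes(4, "big") + header + data

def unpack_object(blob: bytes) -> tuple:
    hlen = int.from_bytes(blob[:4], "big")
    metadata = json.loads(blob[4:4 + hlen])
    return blob[4 + hlen:], metadata

blob = pack_object(b"raw video frame", {"project": "demo", "codec": "ProRes"})
data, meta = unpack_object(blob)  # everything needed travels inside the blob
```

Because the blob is self-describing, moving it to another node or site moves the metadata with it, which is the portability property the paragraph above describes.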
This approach extracts every bit of value from standard drives, servers and network infrastructures. It is at the core of our design and our key to delivering industry-leading S3 throughput with sustained PB-scale NFS read/write. Caringo SwarmNFS delivers a high-performance translation from file to object that uses patent-pending features in Swarm that eliminate the need for a spooler in the middle. Spoolers are expensive, put your data at risk, degrade performance, and get crushed under sustained ingest workloads.
Why the Updates?
Many content-driven organizations prefer to store data on premises to reduce security risks and copyright infringements. They need to cost-effectively scale to petabytes with distributed ingest and they need data to reside close to access points and applications. We are giving large organizations even more of what they already love about Caringo—performance, streamlined management and granular data insight.
At the same time, we are launching a new single-server appliance for smaller organizations, giving them a much lower cost of entry to the most scalable, on-prem, object-storage solution. This appliance will be valuable for a number of use cases, particularly for Media & Entertainment (M&E) organizations such as post-production houses, studios, and broadcasters.
Highlights from our latest release include:
- Swarm 10 object storage has been optimized for dense, distributed environments, including an update to Elasticsearch 5. In a recent deployment, Swarm delivered an astounding 35 GB/s read and 12.5 GB/s write aggregate S3 throughput, the object storage industry’s fastest performance.
- New Swarm Single Server reduces entry-level hardware requirements by 75%. Swarm Single Server is a fully self-contained appliance that provides all the features of Swarm with 96 TB of raw storage that can be racked or fit under a desk.
- SwarmNFS 2.1 delivers parallel, petabyte-scale sustained streaming of NFS to object. In recent tests on standard hardware, a single instance of SwarmNFS sustained reads of 1.6 GB/s (3PB+ per month) with no caching or spooling. SwarmNFS also leverages Swarm’s parallel architecture so that multiple instances can be deployed as needed to further improve throughput.
- FileFly 3.0 now supports AWS, Azure and Google Cloud, providing file tiering from Windows and NetApp. In addition, a new, full-featured FileFly Community Edition is available that includes 25 TB of usable data transfer to any target.
Join me at 11am PT/1pm ET tomorrow for our webinar: Swarm 10—Storage for the On-Demand, Distributed World. Our VP of Marketing Adrian “AJ” Herrera and I will talk about what is new in Swarm 10 and detail how we have evolved our product line to set a new standard for on-premises object storage.
Since 2005, Caringo has been strategically looking to the future and altering the storage paradigm with groundbreaking technology. A few things at Caringo have remained the same: the spirit of entrepreneurship, our focus on technical innovations and meeting customer needs, as well as an inclusive corporate culture that thrives on coffee (available in mass quantities at HQ in Austin) and collaboration (often over beer, which is a constant in the HQ fridge).
However, in other ways, Caringo has changed dramatically. The foremost example of this is the evolution of our product line. From pioneering pure object-based storage software to a diverse and well-rounded product line that includes our award-winning FileFly Secondary Storage Platform (read the review); SwarmNFS—the first lightweight file protocol converter to bring the benefits of scale-out object storage to NFSv4; Caringo Drive; private, public and hybrid cloud capabilities; and complete hardware/software solutions, Caringo has travelled a long road to get to this junction.
At Caringo, we have expanded not just the breadth and functionality of our solutions, but the customers and industries that we empower all over the globe. Currently, we serve hundreds of organizations in Media & Entertainment, MSPs, educational and government organizations, medical facilities and research laboratories. On 10-10, Caringo will be unleashing the power of Swarm 10—Storage for the On-Demand, Distributed World.
Please consider this your personal invitation to join us on October 11 at 11am PT/1pm ET for a webinar with Caringo CEO Tony Barbagallo and VP Marketing Adrian “AJ” Herrera as they talk about what is new in Swarm 10, a landmark release that enhances every part of the Caringo product suite with unrivaled performance and cost savings enabled by our unique pure-object approach. Learn how Caringo has set a new precedent in on-premises object storage with blazingly fast S3 throughput and sustained petabyte-scale NFS-to-object read and write—all on standard hard drives, servers and networking infrastructure.