One of the most popular uses of Swarm is storing, archiving and delivering digital video.
In a recent blog, I detailed the market conditions that are driving customer requirements. The blog was about sports video; however, we are seeing the same trends in every market that relies on digital video (post production, film, surveillance, houses of worship and corporate training to name a few).
The biggest challenge for many professionals today is enabling “on-demand” in existing workflows. Said another way, they are struggling with providing instant access to digital video and delivering it immediately to any device. At the heart of enabling “on-demand” is efficient file movement. This blog details a few of the specific features and interfaces Swarm employs to enable efficient file movement. The first step in efficient file movement is being able to integrate with Swarm via your existing applications and workflows.
- Sustained Data Streaming
- S3 Support
- S3 Support Importance
- Parallel Uploads
- Range Reads
- Partial File Restore
- Wrap Up
Unlike many gateway and file system interfaces on the market, SwarmNFS is a file-to-object converter. It provides a mountable volume for your NFS or SMB applications and converts files to objects in a lightweight fashion (in flight) without spooling or caching. Therefore, you can use NFS, SMB or even S3 to read, write, modify or access a file, enabling true multi-protocol access. In a recent benchmark, a single instance of SwarmNFS delivered 1.56 GB/s read performance on commodity hardware.

Continuous S3 Support
The Amazon S3 API has become the de facto object storage interface. I stress “de facto” because it is technically not a standard. That said, we spend a lot of time making sure we stay as true to the specification as possible.

Why is S3 Support Important?
S3 support is important because, beyond standard storage protocols like NFS and SMB, the S3 interface is the way most ISVs, data movers and asset managers integrate with the cloud and with on-premises object storage. Caringo’s object-based software-defined storage solution, Swarm, supports the Amazon S3 API through an extensible architecture that can later be used to seamlessly support additional third-party APIs. A broad range of applications that support the Amazon S3 API work directly with Swarm. If you are interested in learning how Swarm plugs into asset management solutions via the S3 API, attend our webinar How to Enable Video On-Demand in Workflows on May 30 (or watch it afterwards on demand), where we will demo the asset manager integrations we highlighted at the 2019 NAB Show, including CatDV, Marquis Project Parking, Cantemo and Vidispine. You can also join our May 28 webinar What Your Storage Vendor Isn’t Telling You About S3, where Caringo industry experts discuss the details your storage vendor leaves out.

Parallel Uploads
From an architecture perspective, Swarm employs a parallel approach: all nodes can perform all operations. This makes multi-part or “parallel” uploads an efficient way to ingest files, and it also streamlines combining the multiple parts of a file once they are on Swarm.
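As a rough illustration of how a client prepares a parallel upload, the sketch below computes the byte ranges an S3 multipart upload would cover. The 5 MiB minimum part size is standard S3 behavior; the part size and object size are illustrative assumptions, and feeding each range to an actual `upload_part` call is left to your client library.

```python
# Sketch: computing the byte ranges a client uploads in parallel for an
# S3-style multipart upload. Each range can be uploaded concurrently and a
# final "complete multipart upload" call stitches the parts together.

MIN_PART_SIZE = 5 * 1024 * 1024  # S3 requires every part except the last to be >= 5 MiB

def part_ranges(object_size, part_size=64 * 1024 * 1024):
    """Return (part_number, start, end) tuples covering object_size bytes."""
    if part_size < MIN_PART_SIZE:
        raise ValueError("part size below the S3 5 MiB minimum")
    ranges = []
    start = 0
    part_number = 1  # S3 part numbers are 1-based
    while start < object_size:
        end = min(start + part_size, object_size)
        ranges.append((part_number, start, end))
        start = end
        part_number += 1
    return ranges

# A hypothetical 200 MiB object with 64 MiB parts becomes four parts that
# can be ingested concurrently by different nodes.
parts = part_ranges(200 * 1024 * 1024)
```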
Range Reads
The file movement benefits of Swarm aren’t limited to ingest; Swarm also provides efficient ways to access data, as the native interface to the software is based on HTTP. Swarm supports range reads, giving an application like a video player the ability to specify the exact location in a file at which to start playback. This eliminates the need to download or cache the unwanted portions of the file.
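Because Swarm’s native interface is HTTP, a range read is just a standard HTTP `Range` header. The sketch below shows how such a request might be formed; the host name, object name and byte offsets are hypothetical, while the header format itself is standard HTTP.

```python
# Sketch: building the Range header for a partial read of an object.

def range_header(start, end=None):
    """Range header for bytes start..end (inclusive), or open-ended if end is None."""
    return {"Range": f"bytes={start}-{'' if end is None else end}"}

# A video player seeking into a file might, after consulting the file's index,
# request only a window of bytes around the desired timecode:
hdr = range_header(52_428_800, 62_914_559)  # a hypothetical 10 MiB window
# e.g. urllib.request.Request("https://swarm.example.com/bucket/game7.mp4",
#                             headers=hdr) would yield a 206 Partial Content
# response containing only those bytes.
```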
Partial File Restore in Swarm (currently in beta, with general release scheduled for Fall 2019) brings a feature well known in the M&E world to object storage. Partial File Restore lets you specify (via a web-based UI or the API) the portion of a video file you want and then create a clip of only that portion. That clip can then be moved to a specific application, downloaded by authorized users, or streamed directly from Swarm to authorized users, employees, subscribers or viewers. Get a personalized preview of Partial File Restore before Caringo’s launch later this year.

Wrap Up
This is just a short list of interfaces and features that enable digital video professionals to leverage the benefits of Swarm software-defined storage for efficient file movement into, within and out of Swarm while plugging into existing workflows. In addition to the resources I listed above, we have a growing library of on-demand webinars and highly informative blogs. With the Swarm 11 release just around the corner, our field-hardened object storage contains far more features than the few highlighted here. If you are interested in a full overview, don’t hesitate to reach out to Caringo with questions about your specific use case or to schedule a private demo!
In 2019, Caringo has continued to lead the way in the object storage industry by providing insight into the inner workings of our technology and industry trends, along with experience gleaned from hundreds of successful object storage implementations. We know you have a lot of choices to make about how you store data, and that your organization most likely needs more than one type (or tier) of data storage technology. Following is a collection of resources to help you make informed choices when the time comes to integrate a cost-effective tier of storage for access, distribution and archive.

On-Prem or Cloud, S3 Storage Rules in 2019
Whether you are looking for an on-premises solution or an effective way to tier data to a public cloud like AWS, Microsoft Azure or Google, you’ve most likely come to the conclusion that you need all of your storage to be S3 compatible.
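In practice, “S3 compatible” often comes down to a small client-side difference: the endpoint URL and the addressing style. The sketch below contrasts virtual-hosted addressing (common with AWS) and path-style addressing (often used with on-premises endpoints); the host names are hypothetical.

```python
# Sketch: the two URL addressing styles an S3-compatible client may use.

def object_url(endpoint, bucket, key, path_style=True):
    """Build an object URL under path-style or virtual-hosted addressing."""
    scheme, host = endpoint.split("://", 1)
    if path_style:
        # Path-style: bucket appears in the path; common for on-prem endpoints.
        return f"{scheme}://{host}/{bucket}/{key}"
    # Virtual-hosted style: bucket becomes part of the host name.
    return f"{scheme}://{bucket}.{host}/{key}"

aws = object_url("https://s3.amazonaws.com", "media", "clip.mp4", path_style=False)
onprem = object_url("https://swarm.example.com", "media", "clip.mp4")
```

Most S3 client libraries expose both knobs as configuration, which is why applications written against AWS can usually be pointed at an on-premises endpoint without code changes.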
If you want to learn more about S3 API support in object-based data storage, register for our May 28 Tech Tuesday webinar: What Your Storage Vendor Isn’t Telling You About S3. You will have the opportunity to ask questions of John Bell, Senior Consultant, and Eric Dey, Director of Product, during the webinar. (Alternatively, you can watch the webinar recording on demand after the live event.)
Managing and Moving Data Between Storage Platforms
Managing your data and moving it between various storage platforms can at times be problematic. This is particularly the case when you use storage products that are designed to keep you continuously purchasing expensive hardware to stay ahead of your data growth.
To help our customers combat this and enable them to dramatically cut their storage TCO, we’ve evolved our tools to enable you to manage data and move it from SAN, NAS and tape into object or cloud storage. Check out the Tech Tuesday Using FileFly to Manage Your Data with Azure, Google, Amazon or Swarm webinar on demand to learn more.
The Magic Behind Metadata
Metadata (the “data about data”) has always been a passion for us at Caringo, and we’ve made unique strategic choices about how we manage metadata to ensure our users can unlock the intelligence potential that resides in large data repositories. Ryan Meek, Principal Solution Architect, is our metadata expert, and he talks to John Bell about it in the Tech Tuesday webinar Using Metadata with Object Storage.
Storage for Video Workflows
An arena where we have recently seen tremendous growth of data is in Media & Entertainment. Whether music, gaming or movies, our society today relies on technology not just for work but for play. Our engineering team is constantly working with our technology partners and customers to enable workflow solutions, particularly for video. Ryan Meek and Sales Engineer Jose Juan Gonzalez Marcos discussed this topic and gave a demo of how our object storage works with Media Asset Management (MAM) products in the How Storage Streamlines Workflows in the VOD/OTT Era webinar.
Jose will be back on May 30 with VP of Marketing Adrian “AJ” Herrera for a webinar where he will provide a demonstration of how to enable on-demand workflows. Register now to watch live or on demand.
What Do You Want to Learn About?
What topics would you like to see us cover in 2019 and beyond in our webinars and blogs? Email us at email@example.com with your requests and/or questions. We are always happy to provide information you need to choose the right storage solution for your business or organization.
The challenges of enabling “on-demand” workflows are being felt across every industry driven by digital video. However, those who have large content archives or are struggling with supporting live events are facing particularly challenging issues. Sports video professionals need to deal with both. In this blog, I will give a high-level overview of how object storage enables “on demand” for sports video workflows. First, let’s level set on the definition of “on demand” and the resulting requirements.

What Does “On Demand” Mean and What Does It Require?
On demand means delivering content at the end user’s convenience. Depending on where you sit in the sports video lifecycle, your end user is different. If you are on the production side, your end user may be VFX or colorists, or possibly your client requesting a new project that reuses clips from previous games or episodes. If you are in broadcast, maybe the end user is a regional station or a subscriber. Or, if you are a sports team, maybe your end users are producers, executives, coaching staff, trainers or athletes. What delivering content at their convenience boils down to is that (1) you can find the file and (2) you can stream or deliver it to the required application or device when they request it.

How Does Object Storage Enable On Demand for Sports Video?
When evaluating data storage solutions, it comes down to your budget and requirements. It’s reminiscent of what Billy Beane did with the Oakland Athletics in 2002 by using sabermetrics. The A’s had a $44M budget, the third lowest in Major League Baseball (MLB) at the time, while the Yankees’ $125M budget was the highest. What Mr. Beane and his staff realized was that the traditional, subjective form of recruiting often fell short, and that by focusing on what led to scoring (on-base and slugging percentages) he could pick up undervalued players who statistically had a chance against teams at the top of the budget scale. This led to the A’s famous 20-game winning streak.
If your focus is enabling on-demand access or providing economical, anytime access to content, object storage is your best storage option. Object-based storage maximizes the efficiency of your budget by leveraging commodity hardware (similar to how Mr. Beane maximized the efficiency of his budget via undervalued players), delivering cost-effective, scalable storage that includes self-healing, rapid recovery, automated management, built-in replication (and other features) with instant content access. Best-of-breed solutions from object-based data storage vendors like Caringo have search, parallel uploads and direct streaming as standard features. This is why object storage is the enabling technology behind every cloud storage service and why there is an object store at the heart of every major video-on-demand service today (Netflix, Hulu, Amazon Prime and others).

Object Storage in Sports Video Workflows
So exactly where does object storage fit into Sports Video workflows? Below are a few diagrams that show where object storage would fit into workflows for private streaming and longtail video on demand (VOD), tape storage replacement and centralized backup. The sections in purple with the Caringo Swarm logo indicate where the object storage solution is deployed.
As with any data storage technology, being able to use object-based software-defined storage boils down to the protocol or interface. Historically, object storage was accessed through a proprietary RESTful interface. To interface with object-based storage, an application developer had to integrate to the vendor’s API. In layman’s terms, this meant object storage didn’t just work out of the box with applications like file-system-based solutions that relied on SMB/CIFS (Windows) or NFS (Linux). This led all object-based storage vendors to create interfaces for SMB/CIFS and NFS.
However, the tipping point for object storage was the proliferation and support of the Amazon S3 API. Now, just about every current application used by Sports Video professionals either supports the Amazon S3 API already or will in the next year or so. Support of the Amazon S3 protocol means you can use both the Amazon S3 cloud service or any on-premises object storage solution that supports S3.

Where Can I Learn More About the S3 API in Object Storage?
If you are interested in learning more about S3 API support in object-based data storage, register for our May 28 Tech Tuesday webinar: What Your Storage Vendor Isn’t Telling You About S3. (If you are reading this after May 2019, watch the webinar recording on demand.)

How Do I Know If My Organization is Ready for Object Storage?
- It is taking too long to access video and project files from your archive
- You need to stream or share internal video but don’t want to use a CDN and the files are too large to email or FTP
- You need to support workflows that need access to archived content via S3, NFS and SMB
- Cloud storage and NAS are too expensive and tape storage recall times are too long and difficult to manage
Research aside, if you identify with any of the above statements, then your organization is probably ready for object storage.

How Can Caringo Data Storage Help Sports Video Professionals?
With all that said, object storage isn’t a panacea, but it is an increasingly important storage technology that enables on-demand access. We have some great resources that can help you understand the differences between data storage tiers, block storage vs file storage vs object storage, and how to migrate from tape storage to object storage.
As we’ve been introduced to sports teams, broadcasters, entertainment venues and universities, we have heard a lot of common themes in their challenges. Here at Caringo, we searched for an organization where we could learn more and where our expertise could be leveraged to solve these challenges. This led us to the Sports Video Group (SVG), and we are proud to be one of their newest members. SVG is a group that was created to advance the creation, production and distribution of sports content. If you will be at the upcoming Sports Content Management Forum in NYC on July 24 and would like to meet, let us know! Or, you can schedule a consultation with us at any time.
The post Object Storage: Enabling On-Demand Workflows for Sports Video appeared first on Caringo.
Everywhere you look these days, there are articles about new scientific breakthroughs. The storage world is abuzz about the first real picture of a supermassive black hole. This black hole is at the center of galaxy M87, and it took about 3.5 PB of data to generate the picture. In total, the project collected 5 PB (that is, 5,000 TB, or about 625 8-TB drives). Amazingly enough, hard drives with the data were shipped via airplane from different locations to be consolidated!
Why in the world would you ship data around on drives instead of using the cloud or FTP? The problem was not storage capacity (5 PB is easy enough to store in AWS). The issue was transferring that amount of data in a reasonable time frame.
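A back-of-the-envelope calculation shows why the drives flew. The link speed below is an illustrative assumption, not a figure from the project:

```python
# Sketch: how long would it take to move the black hole data over a network?

def transfer_days(size_bytes, link_bits_per_sec):
    """Days to move size_bytes at a sustained link rate (ignoring protocol overhead)."""
    return size_bytes * 8 / link_bits_per_sec / 86_400

days = transfer_days(5 * 10**15, 10 * 10**9)  # 5 PB over a dedicated 10 Gbit/s link
# roughly 46 days of fully saturated transfer, before any overhead or retries
```

At that scale, a crate of hard drives on an airplane has far higher effective bandwidth than most networks.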
A World Full of Data
Scientific data is often collected from an eclectic mix of sources and can easily fall victim to the age-old curse of storage silos.
Consider just a few of the various sources for data:
- Historical records on various types of storage (from handwritten notes to archival tape to various storage platforms)
- IoT devices, telemetry units, telescopes, etc.
- Surveys & Interviews
- Observation (by researchers or by video)
As science progresses, research organizations around the world strive to arm their researchers with the technology to continue making advancements, and data storage is an important tool in the world of high-performance computing (HPC). However, similar to the point made in AJ Herrera’s recent blog What are the 5 Tiers of Storage for New Video Production Workflows?, one tier of storage does not fit all.
In a research setting, a well-designed storage infrastructure integrates various tiers (or types) of storage to enable the collection, storage and analysis of scientific data. However, recent advances in globally distributed workflows and the resulting access requirements are driving a paradigm shift from distributed and parallel file systems to object storage.

Can Object-Based Data Storage Replace Parallel File Systems?
“Yes! For read-intensive workloads,” concluded CEO Tony Barbagallo when he posed the “Can Object Storage Really Replace Parallel File Systems?” question in our blog. Said another way, object storage (on the appropriate underlying infrastructure) can enable high-throughput, managed access to research data, streaming it to distributed users and reducing time to discovery. An example of this is how the UK’s Science and Technology Facilities Council (STFC) Rutherford Appleton Laboratory (RAL) Space uses Caringo Swarm Object Storage as part of their JASMIN super data cluster. Prior to selecting Caringo Swarm, STFC performed extensive benchmark testing on a number of object storage solutions to determine which best met the requirements of the project.
It is going to take more than just massive amounts of data storage for the scientific community to streamline distributed collaboration. It will take a coordinated approach between storage, networking and data analysis tools, such as those provided by our partner Globus. Globus is a secure, reliable research data management service used by thousands of organizations to move, share and discover data via a single web browser interface.
Learn more about Globus and how it works with Caringo Object Storage to solve issues by combining the benefits of S3-enabled private cloud storage with secure, reliable research data management services by reading our solution brief.
The post Scientific Breakthroughs and the Role of Data Storage appeared first on Caringo.
As your digital video workflows grow in scope, your underlying storage strategy must adapt. The days of buying one tier of storage for editing and one tier for archive are quickly coming to an end due to rapidly evolving asset reuse, globally distributed workflows and on-demand delivery requirements.
- On-Going Challenges
- Storage Requirements
- 5 Tiers of Storage
- Mapping to Requirements
- Mapping to Workflows
So, how do you define your storage strategy to create better, more cost-effective workflows? What are the characteristics of each tier of storage needed? And, what are the variables to consider when defining how much of a specific storage tier you need?

What are Some Challenges of Video Content?
1080p workflows are quickly turning into 4K workflows, with 8K workflows starting to take hold and 16K cameras looming on the horizon. Therefore, both the size of the image (the resolution) and the size of the resulting files are growing. In addition, uncompressed workflows are now being requested, resulting in multi-TB files.

Aspects of Creation & Consumption
The rise of video-on-demand (VOD) services, broadband and mobile devices has changed the way content is produced and consumed in several ways, including:
- Consumers now view content when they have time, on the device that’s most convenient to them
- Resolutions range from mobile (720, 1080, 2K) to 4K with 8K TVs hitting the market
- You now have to create various versions, dimensions, resolutions for the exact same piece of content
With video being repurposed and reused, nothing is thrown away, and the need to protect content and keep it accessible is critical. Here are some of the factors that intensify the need for protection and access:
- Video content has intrinsic value well beyond the initial use
- All project files, source footage and produced content need to remain instantly accessible
- On-demand requirements are straining tape-based workflows because it takes too long to recall assets
What are the Storage Requirements?
Matching content to the right tier of storage comes down to a few file-use characteristics:
- Speed: What is the speed required to perform your particular task? If you are editing 8K footage in an uncompressed format, you are going to need really fast storage. If you need playout functionality playing back compressed formats over the web, you don’t need as much speed. The goal is to match the speed to the task.
- Length of File Use: How long are files going to be used for their immediate purpose? How long do they have to stay on a particular tier of storage?
- Repurposing: When is a specific file going to be reused? Is it within 6 months?
- Archiving: If you know if and when a file will be repurposed, you can develop your archive strategy. For instance, if you are not going to use a file within 6 months, perhaps it’s time to move that file to a secondary or tertiary tier of storage.
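To put a number on the speed requirement above, consider the sustained data rate of uncompressed video. The frame geometry, frame rate and bit depth below are illustrative assumptions:

```python
# Sketch: sustained data rate (GB/s) for uncompressed video playback or capture.

def uncompressed_gb_per_sec(width, height, fps, bits_per_pixel):
    """Data rate in GB/s for an uncompressed video stream."""
    return width * height * fps * bits_per_pixel / 8 / 1e9

rate_8k = uncompressed_gb_per_sec(7680, 4320, 60, 30)  # 8K, 60 fps, 10-bit 4:4:4
# ~7.5 GB/s per stream: firmly ultra-fast NVMe territory
rate_hd = uncompressed_gb_per_sec(1920, 1080, 30, 24)  # HD, 30 fps, 8-bit 4:4:4
# ~0.19 GB/s: comfortably within reach of conventional NAS/SAN
```

The two orders of magnitude between those numbers are exactly why matching the storage tier to the task matters.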
Once we look at our workflows and file use characteristics through the lens of these requirements, we can start to determine what capacities we need for each specific tier of storage to complete our task. Now let’s take a look at the 5 tiers of storage available for new video workflows.

What are the 5 Tiers of Video Data Storage?
Ultra-Fast NVMe
Ultra-fast NVMe approaches RAM-like latency. This is the type of storage needed for 8K or 16K uncompressed workflows since it is faster than SSDs. You can get content off the drive quickly due to low latency, and do it with fewer drives than a traditional RAID array uses. However, the networking requirements are massive, along with the resulting compute and overall price tag. This isn’t a cheap solution, which is why you buy only the amount needed for your specific task.

Ultra-Fast SAN/NAS Architecture
This is your more traditional SAN and NAS environment needed for high-speed editing platforms. They are fast, reliable and optimized for video workflows, with large-capacity drives and arrays and a multitude of connectivity options (10 Gig, 40 Gig, Fibre…). However, networking upgrades may be required. There are high costs of ownership due to power and maintenance, and performance degrades as they near capacity. These factors are behind the need to eventually offload files to different tiers of storage.

NAS & Filer
If your content has a shelf life of 3–6 months, then you may want to consider putting it on a NAS or filer. They are relatively low cost (vs NVMe and Ultra-Fast NAS/SAN) with petabyte-level capacity. However, they have limited scalability beyond multiple petabytes, are often not accessible over the web, and have long rebuild times in the case of a sector or drive failure. In addition, support and maintenance fees in subsequent years can often exceed the original purchase price.

Local Archival
For on-prem and local archive, you have two options: LTO (tape) or object storage (HDD-based). The benefits of LTO are low cost per TB with virtually unlimited tape capacities. However, you are locked into a tape format, maintenance is often an issue, and if the files are stored across tape drives it can take a long time to recall assets. The benefits of object storage are also low cost per TB, with a cloud-like, low-maintenance experience and dense capacities. However, object-based storage requires available data center footprint and (like LTO) an HSM or data mover to plug into certain file-system-based workflows. For most organizations dealing with new video production workflows and on-demand requirements, object storage is the tier of storage with the most rapid rate of growth.

Cloud Archival
The cloud also employs both LTO and object storage technologies to enable archive services. The benefits include zero maintenance with unlimited capacity and instant deployment. Those benefits, however, come at a recurring cost that compounds over time, and the cloud ends up more expensive than on-prem solutions if you are keeping assets for more than 2–3 years.
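The compounding-cost point above can be sketched as a simple crossover model. Every price in this example is a placeholder assumption for illustration, not a quote; plug in your own figures:

```python
# Sketch: at what year does cumulative cloud spend overtake on-prem spend?

def crossover_year(onprem_upfront, onprem_yearly, cloud_yearly, horizon=10):
    """First year in which cumulative cloud cost exceeds cumulative on-prem cost."""
    for year in range(1, horizon + 1):
        onprem = onprem_upfront + onprem_yearly * year  # capex + yearly power/support
        cloud = cloud_yearly * year                      # recurring service fees
        if cloud > onprem:
            return year
    return None  # cloud stays cheaper over the horizon

# Hypothetical 1 PB archive: $300k hardware plus $30k/yr power and support,
# vs a $250k/yr cloud archive bill.
year = crossover_year(300_000, 30_000, 250_000)
```

With those placeholder numbers the cloud becomes the more expensive option in year two, which is the shape of the 2–3 year crossover described above.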
Which Storage Tier is Best for My Requirements?
Depending on your requirements, you can match the appropriate type of storage. This is illustrated in the chart above. The circles in the chart represent the amount of time that files are going to sit on each tier of storage, based upon the requirement. For instance, if speed is the primary requirement, then files will sit on NVMe or Ultra-Fast NAS/SAN. However, if archive is the primary requirement, the file will most likely be on your local archive or cloud archive tier. This table is another tool you can use to help you map your current workflows to the associated tier of storage.

Which Storage Workflow is Best for Video Production?
Similar to the previous matrix, this table shows what storage tier is ideal for a specific workflow. For instance, for production you would most often use NVMe or Fast NAS/SAN. For video-on-demand (VOD) or over-the-top (OTT) enablement, you would most often use a local archive or the cloud. You can also use NAS for VOD and OTT; however, you would need to add additional layers of infrastructure like a web server. For archiving projects or masters, a local archive or the cloud are the clear choices.

Learn How Storage Can Streamline VOD & OTT Workflows
Watch this on-demand webinar to learn how storage streamlines workflows in the VOD/OTT era. You will see how you can stream content directly from storage, how to set up multi-site content distribution for collaboration, and how to unify file systems like NFS and RESTful S3 workloads. In addition, you will gain an understanding of the pros and cons of using object storage vs tape storage.

Conclusion
The number of tiers of storage you deploy and their resulting capacity all comes down to your budget and the requirements determined by a needs analysis. If you are struggling with editing 8K or 16K uncompressed files, then you should take a look at NVMe. On the other hand, if you are struggling with VOD/OTT enablement or with keeping archives instantly accessible, you should take a look at object storage or the cloud.
We hope that the information covered in this blog and the different tools presented help you define your storage strategy. As always, we and our partners at JB&A Distribution are here to help. If you have questions or would like to schedule a custom demo, contact us today.
The post What are the 5 Tiers of Storage for New Video Production Workflows? appeared first on Caringo.
Quick. Fast. Performant. We often hear requirements for high performance when talking about storage. When we drill deeper, that translates to “fast, we need it to be fast.” This response is common and troublesome.

How Fast is Object Storage?
Fast is relative. Compared to a semi-truck, a Ferrari is fast. But, each serves a different purpose. When it comes to storage, measuring “fast” and what that means in the real world can be a complicated endeavor. Two metrics are typically measured: input/output operations per second (IOPS) and throughput.

IOPS vs Throughput
IOPS measure the speed of operations, which is a useful measurement of performance for file-system-based solutions since files are essentially shredded into thousands of pieces and need to be stitched together quickly on read. Throughput, however, is more about the total amount of data that can be read from a storage system.
IOPS is a great measurement to determine if you can support a specific application—like a 4K or 8K editing suite (as one example). Alternatively, throughput is often used in the context of how quickly you can deliver content to different applications, clients and users.
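The relationship between the two metrics is simple arithmetic: throughput is IOPS multiplied by I/O size, so which number matters depends entirely on the workload. The workload figures below are illustrative assumptions:

```python
# Sketch: why a workload can have huge IOPS yet modest throughput, and vice versa.

def throughput_mb_per_sec(iops, io_size_bytes):
    """Throughput implied by an operation rate and a per-operation I/O size."""
    return iops * io_size_bytes / 1e6

small_io = throughput_mb_per_sec(10_000, 4 * 1024)      # editing-style 4 KiB I/O
large_io = throughput_mb_per_sec(25, 64 * 1024 * 1024)  # streaming 64 MiB objects
# 10,000 tiny operations move only ~41 MB/s, while a mere 25 large reads
# move over 1.6 GB/s: the object-storage delivery pattern.
```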
For object storage, throughput is the main performance characteristic to measure, with IOPS being a secondary measurement.

Throughput of Object Storage
When measuring the performance of an object storage system, not only must raw throughput be taken into account for both data ingest and retrieval, but data protection and recovery speeds need to be accounted for as well. What use is a massive ingest rate if the data is not protected? Being able to claim that you can lose data faster than the competition is not smart marketing. And, not being able to keep up with a client workload is problematic.
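One way to make the point above concrete is to discount a quoted ingest rate by the write amplification of the data-protection scheme, since every client byte must be written multiple times (replication) or expanded with parity (erasure coding). The raw rate below is an illustrative assumption; the overhead factors are the standard ones for the schemes named:

```python
# Sketch: client-visible ingest rate after data-protection write amplification.

def effective_ingest_mb_s(raw_mb_s, scheme):
    """Discount a raw backend write rate by the protection scheme's overhead."""
    overhead = {
        "replication x2": 2.0,       # every byte is written twice
        "replication x3": 3.0,       # every byte is written three times
        "erasure 10+2": 12 / 10,     # 10 data segments plus 2 parity segments
    }
    return raw_mb_s / overhead[scheme]

rep = effective_ingest_mb_s(2000, "replication x3")  # ~667 MB/s usable
ec = effective_ingest_mb_s(2000, "erasure 10+2")     # ~1667 MB/s usable
```

The same backend can look more than twice as fast depending on the protection scheme, which is why quoted throughput numbers should always state how the data was protected during the test.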
Why Conduct Performance Testing of Object Storage?
While being able to quote large throughput numbers is impressive, performance testing storage systems helps us out in many other ways as well. It helps us fine-tune the software and underlying mechanisms. It shows us how the software scales as environments grow in capacity and complexity. Most of all, performance testing lets us know if the storage solution will meet the needs of the customer workload, as it did in our recent object storage performance benchmarking for the UK Science and Technology Facilities Council’s (STFC) JASMIN super data cluster. Download the STFC Object Storage Benchmarking Case Study & Whitepaper.

Lies, Damned Lies, and Statistics
It has been said, “there are three kinds of lies: lies, damned lies, and statistics.” One thing to watch for when reviewing performance statistics is how the testing was performed. For example, if a system posts impressive numbers and performs fantastically with 100-byte files but performance falls off dramatically for files over 1 kB, then the usefulness of that system would be very limited. Was the testing performed in simulation only, or with real-world tools and data? What does the test data actually measure: writing to cache, or the final write to the target media (usually HDD for object storage)? Useless metrics are just that: useless.

Learn More
To learn more, please tune in to our next Tech Tuesday Webinar: Measuring Performance of Object Storage. Ryan Meek, Principal Solutions Architect, and I will give a high-level overview of use cases for object storage, share best practices for testing, and explain how Caringo tackles performance testing on April 23 at 11am PT/2pm ET.
As attendees of the 2019 NAB Show in Las Vegas head home this afternoon, we wanted to share our reflections on the event with you. No other event floor is as vibrant and exciting as the Las Vegas NAB Show. With Creative and IT Professionals from verticals spanning Media & Entertainment to Government to Houses of Worship and everything in between, our team heard a clear theme emerging this year: the need for object storage is on the rise, and the audience now understands its value.

You Get Us (Object Storage Resonates)
Many attendees stopped and looked thoughtfully at our booth message: Caringo Object Storage for Access, Distribution & Archive. It was crystal clear this year, the attendees “got us.” After reading the signage, the attendees quickly jumped to asking probing questions about our technology.
Many of those questions were fielded by our knowledgeable Sales & Marketing team members (Ben Canter, Adrian “AJ” Herrera, Paul Phillips, David Fabrizio, Jerry Tohtz and me). However, those questions often led to in-depth conversations (in English and Spanish) and demos with our Principal Solutions Architect Ryan Meek and our Sales Engineer Jose Juan Marcos Gonzalez, with CEO Tony Barbagallo stepping up to the plate as needed.

What is Object Storage, and How Can it Help My Organization?
Our longevity in the object storage market (Swarm object storage technology is now field-hardened at version 10.2) gives us an edge: the most experienced object storage engineers in the industry. In our mission to make certain you fully understand object storage, we give you access to our technical staff both at in-person events and in our Tech Tuesday webinar series.
Caringo has a highly distributed global team with headquarters in Austin, Texas. For this crew, we pulled from three countries, two continents and four states. Seamlessly, we moved from being a virtual team to a physical team. Like a well-choreographed dance, we welcomed visitors to our booth and introduced them to the staff member who could best provide the information needed. In that process, we all got to know and like each other even more. We even rang in a landmark birthday for David Fabrizio, who has been at NAB for each of his birthdays over the past two decades!

Meeting Friends—Old and New
At the NAB Show, we make new friends and reconnect with old friends in the industry. There are few places better than Las Vegas to have a good time, and we hope you all enjoyed it as much as we did. See you at the 2020 NAB Show! And yes, we plan to bring signature purple light-up yo-yos once again.
As we enter count-down mode for the 2019 National Association of Broadcasters Show (NAB Show) in Las Vegas, I’d like to share a little secret with you. I’m not a gambler. If you read my blog last year, you probably already guessed this.

Protecting Your Assets: The Stakes Are High
Just how high are the stakes? No matter your industry, organization or data, you could jeopardize content as well as company earnings and reputation if data is not properly secured and cannot be accessed when needed. When you are storing and working with pre-production video footage, how many hours and dollars did it take to record? That price skyrockets even higher once your master asset has been produced and finalized.

Optimizing Collaboration and Storage
When you research storage solutions, are you also considering how your data will be used in creative collaboration processes? Long gone are the days when you could store assets in data storage silos and expect to keep your workflow moving efficiently. At the NAB Show, Media & Entertainment (M&E) IT professionals from Studios, Production Houses, Broadcasters and Service Providers will once again be looking to optimize storage and access at every stage of the digital asset lifecycle—from production to delivery to long-term preservation.

What’s New at the Caringo Booth This Year?
Over the past year, we have remained committed to making certain you can store what you need, ensure media integrity and keep assets online and accessible. Caringo provides S3- and NFS-accessible object storage designed for content access, distribution and archive. It can be purchased as software only and run on any x86 server, or you can now purchase it in an on-prem S3-compatible single Swarm Server Appliance.
In addition, we will be showcasing version 3.0 of our FileFly Data Management Tool, which now enables you to move data from NetApp and Windows Filers to Amazon (AWS), Microsoft Azure and Google cloud as well as to Caringo Swarm Object Storage. (Make sure to ask us about the new Free 25TB Community Edition of FileFly.)

What Can I Learn at the Caringo Booth This Year?
Our expert storage architects and engineers will be on hand to help you understand how Caringo’s object storage technologies can help you conquer the challenges of scaling storage in the on-demand world. Our experts can show you:
- Internal streaming and longtail video-on-demand (VOD) directly from the archive layer that’s lower cost than cloud-based services.
- Tape replacement that provides guaranteed content availability with minimal administration.
- Geo-dispersed collaboration platform that plugs into your asset manager or can be used as a stand-alone solution.
- Single target for multiple data sets on a future-proof platform that delivers unlimited scale.
Coming to Vegas early? Get a preview over beer and pizza Saturday or Sunday at the JB&A Pre-NAB Technology Event. Register now.
Then, stop by our booth at the NAB Show expo (#SL13310) to learn more. You can also visit https://www.caringo.com/solutions/media-entertainment/ or contact us to schedule a demo.
The post NAB Show 2019: With Object Storage, You Don’t Have to Gamble appeared first on Caringo.
Last week, I blogged about both traditional backup strategies and some of the newer paradigms for data protection. Here at Caringo, we provide our Swarm object-based storage (available as software only or as a complete software and hardware solution, our Single Server Appliance). Our products have an interesting relationship to the traditional notions of backing up data. On the surface, data protection looks very different in our world; but in reality, many of the same principles still apply.

The 3-2-1 Backup Rule
You are probably familiar with the 3-2-1 backup rule. It dictates that you have three or more independent copies of your data, you store the copies on two different types of media and you maintain one of those backup copies off site.

The Principle of Maintaining Redundant Copies
For example, backing up data is all about having redundant copies of your files so that if the primary copy is destroyed you have an option to fall back on. In our Swarm object storage technology, the software is continuously monitoring the data to ensure multiple copies are available at any given time.

Maintaining a Copy Off-Site
A best practice for backing up data includes keeping at least one copy off-site, and many object storage solutions (ours included) make it easy to do just that. In Part 1 of this blog, I noted that there are a lot of off-site storage options for backup data—from tape to an operational disaster recovery site that can feed data back at a moment’s notice. And of course, there are cloud solutions like Amazon (AWS), Google, Azure or a private cloud, such as those that can be built with Caringo Swarm.

Cost Considerations for Protecting Data
With low costs for “cold” storage and high levels of baked-in protection, the popularity of cloud solutions is really not at all surprising. But for companies with legacy systems, determining how to protect their data in the cloud may seem a daunting task. (Check out our Protecting Data with Caringo Swarm Object Storage whitepaper to learn how our built-in, continuous data protection works like “a ship carrying and protecting your data as the river of hardware changes over time.”)
The challenge of determining how to protect data in the cloud was one of our motivating factors at Caringo for the recent release of our FileFly 3.0 Data Management Tool, which now allows for automated backing up of data to a Swarm cluster or to AWS, Google or Azure clouds—according to your policies and your timing.

Policy-Driven Data Management with FileFly
With FileFly, data movement is governed by policies that describe the protection needs of the data as well as which back-end data store or stores best fit the business needs for that data. For example, frequently accessed data can be backed up every few minutes to both an object storage cluster and to the cloud, while infrequently accessed data is perhaps sent to cold storage once a week. FileFly allows for these sorts of policies to be defined in advance so administrators can move on to other tasks without having to worry about overseeing arduous backup tasks. It’s your data, protected your way.

Learn More
To learn more, watch our Tech Tuesday: Using FileFly to Manage Your Data with Azure, Google, Amazon or Swarm webinar on demand or contact us to talk to a storage expert or to schedule a demo.
Over the last several years, it’s been encouraging to watch the popularity of World Backup Day grow. While the day is designed to encourage individuals to back up their personal data, the concept is just as critical in the business world. I recently read a StorageNewsletter post that stunned me with stories of businesses large and small that had been severely hurt (or ruined) by the lack of an adequate backup solution. With data security central to the success of companies, we would all be wise to examine the effectiveness of how we back up and protect critical business data!
In homage to this year’s World Backup Day, here’s a quick review of the current state of the art in backup methodologies for businesses.

Backing Data Up to…Where?
There are a lot of off-site storage options for your backup data. Some people choose to have a physical closet packed full of tapes at the cost of high restore latency, while others choose to have an operational disaster recovery site that can feed data back at a moment’s notice. Still others choose to use a cloud solution, whether AWS, Google, or Azure.

Backing Up What Counts
No matter where you choose to store your data, there will always be a cost associated with it. One of the big questions to ask, then, is how to balance protection characteristics (durability, availability, latency) with the invoice every month. One easy way to reduce that bill is to make sure you only back up the data you really need.
After all, if you’re an IT administrator and your users have home directories full of cat pictures and other clutter, do you really need to incur the time and expense to protect those files? Or would that be better spent on more/better/faster protection for your critical business data?
Top-tier backup solutions allow administrators to define policies beyond a wholesale copy of a drive’s data. Rather, they let admins set fine-grained policies to only deal with the data that matters.

Knowing Your Data
Some data can tolerate having the last few hours’ (or days’) worth of changes be lost in a disaster, but some data cannot. It is important for the owners of this data (and the administrators tasked with protecting it) to have some sense of the loss tolerance of their data.
These different protection requirements have also pushed backup solutions away from large batch protection at set intervals (such as an administrator sitting by a tape deck, waiting to swap out cassettes and push “resume” for hours on end). Instead, data protection now happens on a more continuous basis and for many IT organizations has replaced the concept of “backing up data.”

Learn More
As Warren Buffett famously said, “In order to succeed you must first survive.” Protection of ever-expanding data sets has become absolutely critical to ensure that survival. Join us in celebrating World Backup Day!
If you have questions about protecting your data, please contact us. And watch for part 2 of this blog where I explore how you can easily protect your business data by incorporating Caringo Swarm, our field-hardened S3-compatible storage platform, and using Caringo FileFly Data Management Tool, which provides you with the option to move your data into Caringo Swarm, Amazon (AWS), Microsoft Azure or Google cloud platforms.
Formats and methodologies for storing and distributing information continue to evolve. This means that the tools we use to manage and move data must also grow. When the Farmer’s Almanac originated in 1818, it was printed using a printing press. Fast-forward 200 years, and now it is available online as well as printed (using far more sophisticated technology). Having the trusty Farmer’s Almanac on the Internet, we can easily use Google to discover that “astronomers and calendar manufacturers alike now say that the spring season starts one day earlier, March 20, in all time zones in North America.” (If, like me, you thought today was the first day of spring, don’t feel bad.)

The Origin of Caringo FileFly Data Management Tool
FileFly was the first product launch I worked on when I joined the Caringo team in 2015. When it was launched, we referred to it as Caringo FileFly for Swarm—as it plugged into NetApp and Windows file servers so organizations could combine the performance of file servers with the scalability and economy of our Swarm object storage software platform. And, it did this without affecting existing mount points or applications.

Award-Winning FileFly Adds Support for Amazon (AWS), Microsoft Azure & Google Cloud
Let’s fast-forward again. This time, just three and a half short years. After winning a TechTarget Search Storage Silver Award in 2017 and being awarded 5 stars by Brien Posey in a TechGenix review, FileFly has bloomed into a full-fledged Secondary Storage Solution that offers multi-cloud support for Amazon (AWS), Microsoft Azure and Google Cloud as well as our Caringo Swarm Object Storage (now field-hardened at version 10).

What’s New in FileFly v3.0?
Like earlier versions, FileFly 3.0 still enables organizations to scale storage to multiple petabytes and trillions of files, optimize filers, and consolidate files across multiple locations. This allows organizations to take full advantage of public, private or hybrid cloud infrastructure. And, it still requires no application changes and provides transparent end-user file access.
With the innovations in FileFly 3.0 (launched in 2018), it is the only solution you need for complete, automated data lifecycle management of all your unstructured data—from creation to preservation. To keep it simple, it remains policy-driven so you select the level of file data movement based on your business requirements. This makes it simple to consolidate data on a single platform, to add a remote disaster recovery (DR) site or to integrate seamless transparent file backup with smart provisioning of primary storage.

FileFly Community Edition
The need for this type of data management tool that provides the ultimate flexibility for moving your data led us to offer the FileFly Community Edition, a full-featured 25 TB license at no cost. We wanted to give organizations an easy way to try FileFly 3.0, with no upfront investment or long-term commitment. (Email us at firstname.lastname@example.org for more information on obtaining a FileFly Community Edition license.)

Learn More About FileFly
Register now for our March Tech Tuesday webinar: Using FileFly to Manage Your Data with Azure, Google, Amazon or Swarm. On March 26 at 7am PT/10am ET, Senior Consultants John Bell and Tony Lokko will explain how FileFly works and give a live demonstration. Throughout the presentation, they will take questions so you can learn from their extensive experience in architecting, deploying and managing storage solutions.
The post Caringo FileFly Data Management Tool Reaches Full Bloom appeared first on Caringo.
30 years ago to the month, CERN computer scientist Tim Berners-Lee published a proposal for what would become the World Wide Web. It took another 2 years for the first web page to be published by CERN. After that, things took off. A new platform for mass communication using text and images over Hypertext Transfer Protocol (HTTP) was created. Fast-forward 30 years and the same protocol and platform are being used to deliver video at an astounding rate. This arguably started with YouTube (2005) but didn’t take off until the entire ecosystem of streaming services, in-home streaming players, broadband and mobile devices caught up (around 2010).
We are now all conditioned to expect the delivery of video—regardless of our location—in an “on-demand” fashion. However, a lot of the underlying workflows and enabling applications and infrastructure don’t natively interface with HTTP. So how do you enable HTTP in existing workflows?

HTTP Streamlines On-Demand
Media players aside, the most important part of offering content on demand is the ability to deliver content over HTTP. That’s what Content Delivery Networks (CDNs) enable, often with edge devices that are closer to the point of consumption for faster delivery to a very large audience. You can also deliver content over HTTP with a few layers of technology including a load balancer, web server and network attached storage. Or, you can implement an object storage solution that wraps load balancing, web serving and storage all into one platform.

Most Applications Were Designed for File Systems
Most applications used in the creation and management of content were developed to support file systems and their protocols (NFS and SMB), not HTTP. This makes sense given that file systems and file system protocols are the way we have all edited and collaborated on files for the past few decades. (Learn more about the differences between File Storage vs. Block Storage vs. Object Storage.)
The good news is that a lot of applications are now supporting cloud-based services and their protocols (all based on HTTP), the most popular being Amazon’s S3 protocol. That doesn’t mean that you can stream content directly from these applications. It simply means that you can now output content created from these applications to Amazon S3 or another storage platform that supports the S3 protocol.

Content is Being Reused, Archive Must Adapt
One of the major hurdles content-driven organizations face is the ongoing need to reuse content. Tape is still the primary method of long-term preservation because, from a straight $/GB comparison, it is the most economical of all storage media. But in our new on-demand world, what organizations save in storage costs they lose in increased business and opportunity costs. Accessing content stored on tape is more complex than you might think, as it is usually a manual process that requires significant time and effort. Project files may span multiple tape drives, and even if only a few seconds of content are needed, the entire file needs to be recalled. A solution that is starting to gain traction is using object storage as a tier between the high-performance tier and the archive tier. With object storage, you can also deliver content within your internal network or via a private URL the same way you can deliver content to the public over HTTP.
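Because object storage speaks HTTP natively, a client can use a standard Range header to pull just the bytes it needs instead of recalling an entire file, which is exactly what tape cannot do. Here is a minimal sketch using only Python's standard library; the URL below is a hypothetical placeholder, but any HTTP(S)-accessible store that honors the Range header behaves the same way:

```python
import urllib.request

def range_request(url: str, start: int, end: int) -> urllib.request.Request:
    """Build a GET that asks the server for only bytes start..end (inclusive)."""
    return urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})

# Hypothetical URL for illustration only.
req = range_request("https://storage.example.com/bucket/footage.mp4", 0, 1_048_575)
print(req.get_header("Range"))  # bytes=0-1048575

# To actually fetch the first MiB, open the request; a server that supports
# range reads answers with 206 Partial Content rather than the whole object:
# with urllib.request.urlopen(req) as resp:
#     first_mib = resp.read()
```

A video player seeking to a specific timestamp issues exactly this kind of request under the hood, which is why a few seconds of a multi-gigabyte clip can start playing immediately.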
This is only a high-level view of the challenges faced when enabling on-demand. Every organization’s environment and requirements are different. The good news is that most application developers are integrating S3 support and there are a number of companies that you can call for help (including Caringo!).
If you want to learn more about how to enable on-demand capability in workflows, join Ryan Meek, Principal Solution Architect, and Jose Juan Gonzalez Marcos, Sales Engineer, on our upcoming webinar: How Storage Streamlines Workflows in the Era of VOD/OTT. You will learn how to stream content directly from storage, how to set up multi-site content distribution for collaboration, how to unify file system (NFS) and HTTP (S3) workloads, and the pros and cons of using object storage vs tape.
With the rise of object-based storage software solutions, you’re likely familiar with the many benefits of integrating this latest technology into your data storage infrastructure.

What are the Benefits of Object Storage?
Of course, different object storage solutions vary in their capabilities and methodology. Frequently, you will see the following benefits with a best-of-breed solution:
- Improved productivity with data portability between various protocols (S3, SCSP, HTTP and HDFS)
- Expanded search capabilities using the power of metadata to query & list based on file & object characteristics (watch Using Metadata with Object Storage webinar to learn more)
- Lowered Total Cost of Ownership (TCO) of data storage and distribution at scale (scale up/scale out on commodity infrastructure)
- Protection for your data (check out this whitepaper to learn more Protecting Data with Object Storage)
As we discussed in our Back To Basics Blog: What is Object Storage?, there are three storage technologies available in the market today:
- Object Storage
- File Storage
- Block Storage
File and block are the most common data storage methodologies and are used by traditional storage technologies such as tape storage, storage area networks (SANs) and network attached storage (NAS).
Beyond the technology differences in how SAN, NAS, Object Storage and Tape architectures are designed and function, there are a number of distinguishing capabilities that vary widely. This chart details some of the most critical aspects to consider as you determine how to architect your data storage:
The chart below illustrates what type of storage is recommended for various use cases. For example, if you are running a highly transactional workload, you will want to use SAN or NAS as your primary (or “tier 1”) storage. However, if you must retain data for legal purposes but do not expect to need to access it again, a cold archive of object storage or tape is appropriate:
Even with these benefits and many more, the necessary work to migrate data from traditional storage area networks (SANs) and network attached storage (NAS) to object storage can seem overwhelming. So, what are the options for migrating to object storage?
- Direct Integration: This is used for applications that are designed with a RESTful API to use object storage. It’s ideal for proprietary or home-grown applications as they extract the maximum performance and functionality of the object storage solution. This requires on-going development resources and generally uses S3 as the API of choice.
- Data Manager (archive, backup, asset managers and gateways): This is usually easy to deploy. Considerations include protocol or OS support, namespace, metadata control and, of course, cost.
- Manual Migration (rsync, cron job, copy & delete): You can use commands to orchestrate manual migration of data and it is typically low cost. This method is ideal when decommissioning old infrastructure and apps. It can also be used as part of a scripted backup process. Considerations include migration validation and error correction.
- Professional Services: You can always call in a team of professionals for your migration project. While engaging with experts is the most costly, it is also often the most effective. Professional Services are recommended when you need to quickly move away from a proprietary technology.
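For the manual-migration route above, the whole job often boils down to a loop of copy, verify, then delete. The sketch below shows that pattern for local directories; the directory names are placeholders, and the same copy-verify-delete logic applies when the destination is an S3 bucket reached through a data mover or CLI:

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum a file in 1 MiB chunks so large media files don't exhaust RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def migrate(src: Path, dst: Path, delete_source: bool = False) -> int:
    """Copy every file under src to dst, verify each copy's checksum,
    and (optionally) delete verified originals. Returns files migrated."""
    moved = 0
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, target)
        if sha256(f) != sha256(target):   # migration validation step
            raise IOError(f"checksum mismatch for {f}")
        if delete_source:
            f.unlink()                    # the "delete" in copy & delete
        moved += 1
    return moved
```

Note that the checksum comparison is the "migration validation" consideration called out above: without it, a silent copy failure would only be discovered after the originals were gone.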
What Else Should I Consider Before Migrating to Object Storage?
You will always want to think about the individual requirements for your data store, particularly bandwidth and authentication & authorization, as well as your ongoing costs, management and maintenance.
In our work at Caringo, we’ve helped hundreds of organizations successfully implement object-based storage, busting through storage silos and eliminating what our Director of Product Eric Dey refers to as “the tyranny of file systems.” If you have questions or would like to schedule a custom demo, contact us today.
The post How to Migrate to Object Storage from SAN, NAS and Tape Storage appeared first on Caringo.
As I start to create my “to do” list for my annual trip to Las Vegas for the JB&A Pre-NAB Technical Event and the NAB Show, I quickly jotted down, “download movies to iPad.” And, the first movie that I associate with Vegas is Dodgeball. Sure, there are some more well-known titles (Ocean’s Eleven, The Hangover, Leaving Las Vegas, etc.), but nothing makes me chuckle like the team from Average Joe’s Gymnasium taking on the jerks from Globo Gym. And, it still blows my mind that I can access any movie or show whenever and wherever I want.

Movies & Object Storage
For those not familiar with the movie Dodgeball: A True Underdog Story, it is the hilarious story of a group of misfits entering a Las Vegas dodgeball tournament to save their cherished local gym from the onslaught of a corporate health fitness chain. Quickly, my mind wandered to how the movie reminds me of Swarm, our object-based storage platform. Wait. How does a movie (that should have won an Oscar) compare to storage?

The Origin of Object Storage
My first thought is of Patches O’Houlihan (played by Rip Torn), one of the early stars of the professional sport. This is similar to Caringo and its role in Object Storage technology. As the pioneer in the space, we developed many of the capabilities that are used by all object storage vendors today. And, like Patches, Caringo is the trusted advisor to customers and resellers who enter into this lesser-known “sport.” (Check out our Tech Tuesday webinar series, where we cover everything from evaluating object storage to when to choose object storage over NAS for digital video workflows.)

The Five S’s of Storage
One of Patches’ key teachings was, “If you are going to learn to become true dodgeballers, then you’ve got to learn the five d’s of dodgeball: dodge, duck, dip, dive, and dodge!” This mantra can easily be changed to represent a proper storage platform for an M&E environment or any type of environment dealing with large amounts of content. If you are going to provide content storage, then you’ve got to have the five s’s of storage: Security, Scalability, Storage Administration, Searchability, and Security. You can learn more about how Caringo Swarm can help with the s’s on our Media & Entertainment Solutions page. In short, Swarm provides a highly secure, easily accessible, scale-out object storage platform that makes it easy to manage and protect content.

Teamwork Makes the Dream Work
As the movie progresses, the team adds another member, their lawyer/former softball player Kate Veatch (played by Christine Taylor). This reminds me of how Caringo has partnered with JB&A Distribution.
The JB&A Team is dedicated to bringing the most innovative and complete solutions to market and is staffed with industry experts. JB&A provides an ecosystem of certified, tested and proven products and workflow solutions that enable resellers to provide complete workflow solutions. Swarm object storage is a critical part of these workflow solutions, as Nicholas Smith, JB&A Director of Media Technology, explains in this episode of JB&A Today. Since Caringo integrates easily with Quantum, CatDV, SNS and a wide number of products that JB&A offers, it was a natural fit.

Visit Us at NAB in Las Vegas
If you are going to the NAB Show, make sure you go a day or two early to visit the JB&A Pre-NAB Technical Event, as it is one of the best places to get hands-on experience with leading technologies supporting digital video workflows.
And, make sure to stop by our booth (#SL13310) at the NAB Show. Need a pass? Just let us know. Now, I must get back to dodging wrenches as I prepare for my trip to Vegas. I hope to see you there.
Media & Entertainment (M&E) professionals from around the globe will descend on Las Vegas for the 96th annual National Association of Broadcasters Show (NAB Show) April 8–11. In addition to the M&E industry, the NAB Show is a magnet for IT professionals in diverse verticals such as government, education, and security surveillance who need scalable, S3-compatible storage that enables secure file distribution, access, tape replacement and OTT/VOD (Over The Top/Video on Demand) workflows.

Content Protection & Metadata Search Capabilities at Scale
From our field-hardened Object Storage platform to our innovative tools for tiering and moving data, Caringo has been at the forefront of helping organizations around the world effectively manage and monetize their data since 2005. Just as critical, Swarm provides built-in data protection and powerful metadata capabilities. Learn more about the ease of searching and accessing data in Swarm on our next Tech Tuesday webinar where Solution Architect Ryan Meek will take a deep dive into our world-class metadata features.

Swarm Single Server Storage Device
Over the years, Swarm object-based storage software has been used as a highly scalable platform that can be run on any x86 server hardware. While that is still a popular option, we’ve introduced a simple way for organizations to get started using Swarm by offering it on the Swarm Single Server Appliance.
Swarm Single Server was designed to meet the needs of small- to medium-sized content-driven organizations. This on-prem, S3-accessible, object-based storage device contains all the hardware and software organizations need to keep archived content online, searchable, web-accessible and secure. You can start with only one-quarter of the hardware that a typical object storage deployment requires and then add capacity by just plugging in another appliance.

How Storage Streamlines Workflows in the VOD/OTT Era
Want to learn how storage streamlines workflows in the Video On Demand/Over The Top (VOD/OTT) era? Join Adrian “AJ” Herrera, VP Marketing, and Jose Juan Gonzalez Marcos, Sales Engineer, at 9am PT March 19 for a live webinar.
The post Scalable S3 Storage & Video Streaming Hit Sin City appeared first on Caringo.
It’s February, spring is here, and love is in the air. So why not share a little of that love with FileFly? FileFly 3.0 is the latest version of our Windows and NetApp archival tool and it can now be used to move data to Microsoft Azure, Google Cloud, Amazon AWS or Caringo Swarm Object-Based Storage.
We launched FileFly in 2016 and released version 2.0 expanding its capabilities in 2017. FileFly won a Silver medal in the TechTarget 2017 Products of the Year Awards: Data Storage Management and was also reviewed and awarded a Gold Star by TechGenix.com.
Seeing how it helped our customers move data efficiently, we wanted to expand the capabilities of the tool and broaden how and where it could be used. Thus begins the latest chapter of our passion for solving users’ evolving long-term data retention and access needs.

The Next Chapter for Windows & NetApp Archiving
FileFly 3.0 opens a new chapter for our Windows and NetApp Archival product. Version 3.0 introduces features that provide increased flexibility so you can better manage and share your data. Licensing is now measured on front-end usable capacity rather than tied to the Swarm license capacity, so you are free to choose alternate storage destinations for datasets or multiple Swarm clusters. Yes, now you can use FileFly to tier your data to the “Big 3” (AWS, Azure, and Google Cloud) as well as to Caringo’s Object Storage solutions.
Using FileFly’s policy-based tiering approach, you can send “warm” data to a Swarm cluster made up of high-performance hardware while the cold archive data that may never be touched can be routed to a slower cluster for long-term retention.
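FileFly's actual policies are configured through its own tooling rather than in code, but the warm/cold routing logic described above can be sketched generically. The tier names and age thresholds below are invented for illustration only:

```python
from datetime import datetime, timedelta

# Hypothetical policy table: each rule maps a last-accessed age threshold
# to a destination tier, evaluated in order. Anything older than every
# threshold falls through to the cold default.
POLICIES = [
    (timedelta(days=30), "fast-swarm-cluster"),      # "warm" data, touched recently
    (timedelta(days=365), "archive-swarm-cluster"),  # aging data, slower cluster
]
DEFAULT_TIER = "cloud-cold-storage"                  # long-term retention

def pick_tier(last_accessed: datetime, now: datetime) -> str:
    """Return the destination tier for a file, given when it was last accessed."""
    age = now - last_accessed
    for max_age, tier in POLICIES:
        if age <= max_age:
            return tier
    return DEFAULT_TIER
```

The design point is that the policy is data, not code: administrators adjust thresholds and destinations once, and every file is routed automatically from then on.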
For the first time, you can have FileFly use cloud targets as storage endpoints along with Caringo Swarm. Need something more to love? Now qualified organizations can try FileFly out for FREE with a 25TB Community Edition. Learn more by visiting our FileFly Product Page.

Putting it All Together
You can have your existing FileFly + Swarm cluster running FileFly 3.0, storing some data on-premises and some in a cloud location. Then, you can trial a smaller deployment of 25TB at a branch office using only a cloud destination as a storage target. When you use FileFly with Swarm Software, our data durability and deployment flexibility provide you with a rock-solid solution for archive.

The Power to Choose Your Cloud Provider
Not sure which cloud provider you want to use? You can use the same server to post some data to Amazon, some to Azure and some to Google Cloud. Since the FileFly license is no longer coupled to the Swarm cluster, you can write some of that data back to your main Swarm cluster from the branch office. Sweeter yet, our end users can still work with and manage the files in Windows or NetApp as they always have.

Watch FileFly in Action
FileFly is the solution you need for complete, automated and flexible data lifecycle management of unstructured data—from creation to preservation. Working with both NetApp and Windows filers, FileFly allows you to set policies that guide the level and target of file data movement based on your organization’s requirements.
Join me and John Bell on our March 26 Tech Tuesday webinar at 10am ET/3pm GMT. We will explain FileFly’s capabilities and conclude with a live demo and Q&A session.
You can bring your questions to the webinar for our live Q&A, or feel free to send them to us in advance at email@example.com.
The post Share the Love with FileFly Data Storage Management appeared first on Caringo.
Having beaten the metadata drums for more than a decade, it’s tempting to look back and bask in the greatness of how right we were—and how right we still are.
However, when we set this course for our object storage product development at Caringo, metadata was thought by many in the storage and IT industry to be a tangential and unnecessary feature.
Let’s just say that the software for this feature didn’t write itself. It took the continuous dedication of our team over a long span of time, but we knew the benefit would be invaluable for those implementing our object storage.

What Is Metadata and What Does Metadata Do?
Jason Scott perhaps said it best: "Metadata is a love note to the future." Metadata is data that describes data, so it doesn't 'do' anything by itself, but it does enable the best decisions in the future. To that end, we guarantee the integrity of the metadata and service requests for it without having to read the object itself. Metadata is then indexed for millisecond access, even with billions of objects. (Watch the Power of Metadata in Object Storage video for more information.)

Why Use Metadata for Object Storage?
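One reason is that requests can be answered from the metadata index alone. The following is a minimal conceptual sketch in plain Python (an illustration only, not Swarm's actual API): payloads and descriptive metadata are stored separately, so a search touches only the index and never reads the video bytes themselves.

```python
# Conceptual sketch, NOT the Swarm API: payloads and metadata are kept
# separately, so queries are answered from the index without reading payloads.

objects = {}         # object name -> payload bytes (never touched by search)
metadata_index = {}  # object name -> metadata dict (the searchable part)

def put_object(name, payload, metadata):
    """Store a payload and index its descriptive metadata."""
    objects[name] = payload
    metadata_index[name] = dict(metadata)

def search(**criteria):
    """Answer a query purely from the metadata index."""
    return sorted(
        name for name, meta in metadata_index.items()
        if all(meta.get(key) == value for key, value in criteria.items())
    )

put_object("game1.mp4", b"<video bytes>", {"sport": "soccer", "year": "2019"})
put_object("game2.mp4", b"<video bytes>", {"sport": "hockey", "year": "2019"})

print(search(sport="soccer"))  # ['game1.mp4']
print(search(year="2019"))     # ['game1.mp4', 'game2.mp4']
```

In a real deployment, the index lives in a service such as Elasticsearch rather than an in-memory dict, but the principle is the same: the query never has to read an object to decide whether it matches.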
Today, the world demands universal and near-instantaneous access to the right set of data for the right requests and the right jobs. This is particularly true in digital video workflows, as our VP of Marketing Adrian "AJ" Herrera pointed out in a recent blog. With Caringo Swarm Object Storage, we meet that demand through a field-proven, world-class object metadata service delivered on the Elastic stack.

Dynamic Sets of Data on Object Storage
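The idea behind a dynamic set can be sketched in a few lines of plain Python (a conceptual illustration only, not Swarm's actual interface): the set is a saved query, and its membership is recomputed every time it is viewed.

```python
# Conceptual sketch, NOT Swarm's API: a dynamic set is a saved query over
# object metadata whose membership is recomputed on every view.

catalog = [
    {"name": "intro.mov", "team": "blue", "status": "raw"},
    {"name": "final.mov", "team": "blue", "status": "approved"},
    {"name": "promo.mov", "team": "red",  "status": "approved"},
]

def saved_query(predicate):
    """Return a view function; each call re-evaluates the query."""
    return lambda: [obj["name"] for obj in catalog if predicate(obj)]

approved = saved_query(lambda obj: obj["status"] == "approved")
print(approved())  # ['final.mov', 'promo.mov']

# New objects that match the query join the set automatically.
catalog.append({"name": "recap.mov", "team": "red", "status": "approved"})
print(approved())  # ['final.mov', 'promo.mov', 'recap.mov']
```

Because nothing is copied or pinned, the view stays current as objects are added, changed or removed, which is what makes operating on such sets safe at scale.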
Beyond the retention and search services, we've added the power to define and operate on dynamic sets, which we call collections. Think of a smart folder in your email client that shows you all the emails currently matching a set of conditions. Then, add the power to perform actions on those sets and you have a force multiplier at scale. (Watch the Searchable Metadata & Collections video for more information.)

What About File Access to Data via NFSv4?
As much as we love object-based storage, we know there are still plenty of cases where the right tool is file access. SwarmNFS provides file-level access over the network, serving buckets as well as collections and giving you a dynamic, continuously updating view of the cluster.

Learn More
As data sets have continued to grow from Terabytes to Petabytes and from thousands to billions of files, the benefits that metadata delivers (such as locating the right data quickly and delivering content efficiently to users) have become critical for business success. Join me and John Bell as we dive into our world-class metadata features and usage in our next Tech Tuesday webinar, February 26 at 11am PT/2pm ET.
You can bring your questions to the webinar for our live Q&A, or feel free to send them to us in advance at firstname.lastname@example.org.