We recently certified ChronoSync by Econ Technologies with Caringo Swarm, our hassle-free, limitless object storage software. ChronoSync is a robust file-syncing application for macOS that can synchronize and back up to almost anything you can connect to your Mac. This includes Swarm, cloud services, other Macs, NAS/external drives, iPads, iPhones, PCs or anything you can mount as a volume. It can also perform bootable backups to external drives connected to your Mac or remote drives. A quick overview of functionality includes the ability to:
- Sync: Synchronize files Mac to Mac, Mac to Cloud, Mac to PC, Mac to NAS and, of course, Mac to Swarm
- Backup: Create copies of files to prevent data loss
- Recover: Clone your hard drive so you can quickly recover from catastrophe
- Sync to Cloud: Transfer files to cloud storage so you can access them from anywhere
- Monitor: Take advantage of logs, email, and push/system notifications to track tasks
- Schedule: Run tasks as an event occurs or at any time interval
The team at Econ Technologies put together an informative video explaining their recommended 3-2-1 backup strategy (3 copies in 2 different media formats with 1 of those copies offsite). You can learn more about this strategy and view a quick ChronoSync demo by watching their video.
Learn more about ChronoSync here. Integration with Swarm storage is through our S3 API support. If you have any questions, don’t hesitate to contact your Caringo rep or visit our contact page for more ways to request information.
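Because integration is through the S3 API, any S3-capable tool only needs its endpoint pointed at the Swarm gateway. As a minimal sketch, here is how a path-style S3 PUT target and custom metadata headers might be composed; the endpoint, bucket, and key below are hypothetical, and a real request would also carry an AWS Signature v4 and be sent with an S3 client library:

```python
# Sketch only: composing an S3-style PUT target for an S3-compatible
# store such as Swarm. Endpoint/bucket/key are made-up examples.
from urllib.parse import quote

def build_put_request(endpoint, bucket, key, metadata=None):
    """Compose a path-style S3 object URL plus its custom-metadata headers."""
    url = "%s/%s/%s" % (endpoint.rstrip("/"), bucket, quote(key))
    # Custom metadata travels in x-amz-meta-* headers and is stored
    # alongside the object itself.
    headers = {"x-amz-meta-" + k.lower(): v
               for k, v in (metadata or {}).items()}
    return url, headers

url, headers = build_put_request(
    "http://swarm.example.com", "backups",
    "mac/Documents/report.pdf", {"Source": "chronosync"})
```

The point is that no proprietary client is needed: the same request shape works against AWS S3 or a Swarm endpoint, which is why off-the-shelf tools like ChronoSync can target Swarm directly.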
The post Sync Mac Files with Mobile, PCs, Cloud NAS & Swarm appeared first on Caringo.
Although Flash, NAS and SAN may be the current “go-to” solutions for storage in the High-Performance Computing (HPC) world, that storage paradigm has proven to be limited when it comes to dealing with the rapidly growing data sets of the 21st century. As IT Pros search for cost-effective storage that easily scales with their HPC needs, object-based storage solutions are often at the top of their list.
Tuesday, July 17, I’ll be hosting our monthly Tech Tuesday webinar and this month’s topic is focused on Object Storage for HPC. Pat Ray, Caringo Swarm Integration Engineering Lead, will be my featured guest.
As Pat and I have been developing the webinar, we’ve talked a lot about those pain points and how a “cloud approach” for HPC storage helps to alleviate those issues. This approach introduces valuable benefits to a wide range of public and private organizations including educational and research institutions, laboratories, businesses and others faced with handling relentless data growth. You will find a matrix of pain points and how object storage helps address them in the table below:
| Pain Point | How Swarm Helps | Benefits/Results |
| --- | --- | --- |
| File system limitations (e.g., billions of files, complex directory structures, fixed block sizes) | No dependence on file systems (inodes, block sizes, etc. do not matter). Objects are managed and located based on their characteristics (metadata), not where they live on a file system (directories/paths). | Easily scales to billions or more objects in store. Data with the same or similar characteristics can be easily queried and collated into dynamic collections (saved queries), regardless of where it "lives" in storage. |
| Data integrity concerns (multiple backups/copies usually needed) | Uses auto-correction to prevent corruption of objects. Offers flexible data protection schemes (learn more about replication and erasure coding options). | Continuous built-in data integrity checking and protection delivers multiple "nines" of data durability while optimizing data footprint. |
| Inefficient parallel access and storage silos | Global unified namespace (replacing hierarchical file systems). Stateless Gateways handle multiple protocol personalities. | Cloud enablement (S3 and Azure compatibility). |
| Lack of multi-tenant access and/or quota support (typically, data pools into single-tenant apps and databases) | Multi-protocol/multi-tenant access is already built in (not tacked on). Fine-grained access control with comprehensive authentication/authorization support. | Enable collaboration throughout your user base with secure multi-tenancy. Eliminate storage silos and build an active archive for your data. |
| Lack of "web-accessible" storage (percolating data to web shares causes overly complex web/app/database tier deployments) | Collapses traditional web/app/database/storage layers into a simplified and streamlined RESTful/web access method. | Easily support multi-protocol access (next-gen applications and workflows). |
| Difficulty operating on data subsets locally (object metadata is trapped in a database/uncoupled from the data) | Metadata and data are combined (metadata + data is created/stored as an "object"). | Both data and metadata are managed and protected by the storage system over the object's lifecycle. |
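A "dynamic collection" is, at heart, a saved metadata query. The toy in-memory sketch below illustrates the idea only; Swarm implements this with its built-in search infrastructure, and all object and field names here are made up:

```python
# Sketch only: a "dynamic collection" as a reusable saved query over
# object metadata. The catalog and field names are illustrative.
objects = [
    {"name": "scan-001.dcm", "metadata": {"modality": "MRI", "year": 2018}},
    {"name": "scan-002.dcm", "metadata": {"modality": "CT",  "year": 2017}},
    {"name": "scan-003.dcm", "metadata": {"modality": "MRI", "year": 2017}},
]

def saved_query(**criteria):
    """Return a reusable filter that matches objects on their metadata."""
    def run(catalog):
        return [o["name"] for o in catalog
                if all(o["metadata"].get(k) == v for k, v in criteria.items())]
    return run

# The "collection" is the query itself; its membership is computed on
# demand, so newly stored objects join it automatically.
mri_2017 = saved_query(modality="MRI", year=2017)
```

Because membership is evaluated at query time rather than fixed at ingest, an object's location in any directory hierarchy is irrelevant, which is exactly the property the table above describes.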
We will discuss all of these pain points, as well as considerations when selecting and deploying an object storage solution for an HPC environment, on Tuesday. Make sure to reserve your seat for the webcast and live Q&A, or register now and you will be notified when the broadcast recording is available on demand.
What better way to celebrate the 4th of July than to upgrade to the latest Swarm release? The Caringo team has worked tirelessly over the years to stay ahead of your storage management and data protection/access requirements, and Swarm 9.6 is a great example of how we are continuously innovating and adding new functionality between major release cycles. Enhancements in version 9.6 include:
- User Interface (UI) updates — The Swarm UI includes new capabilities around object renaming and version management allowing more precise content control by incorporating the full expressiveness of the native storage API within the UI actions. Additionally, the UI is updated to support the latest release of SwarmNFS.
- Next-Generation Replication Method — Swarm Storage now supports a new replication method, replication by direct POST. This new replication technology offers better performance and flow management between source and target clusters, so that you can increase the rate (for faster hydration) or lower it (to prevent swamping a smaller target).
- Platform Command Line Interface (CLI) Expansion — The set of CLI commands for Swarm Platform server has been extensively expanded and simplified to improve system administration.
- Documentation — The Swarm documentation has been reorganized by audience use case: Deployment, Administration and Development. Additionally, a new Deployment Planning section has been added that includes best-practice guidance and numerous architectural examples for successfully implementing Swarm.
Swarm version 9.6 is now available for download on Caringo Connect for all customers with a paid support contract. For more detailed information, registered Caringo Connect users can review the Swarm 9.6 Release Notes. If you are evaluating scale-out storage for your organization and would like to speak with one of our object storage experts, please contact us. We would be happy to answer your questions and schedule a custom demo with you.
The post Avoid Storage Fireworks with Swarm 9.6 Object Storage appeared first on Caringo.
As the data deluge continues to escalate, retaining, protecting and accessing data for organizations and businesses becomes more and more of a time-consuming, budget-busting, headache-causing “albatross around the neck.” Unfortunately, this proverbial albatross is not limited to just certain endeavors. It tends to be a horizontal issue that resonates across all vertical market segments.
Caringo Swarm hassle-free, limitless storage was designed to be an “open” way of storing content, accessible via a RESTful interface over HTTP. (See our Swarm Technology overview page.) In 2006, Swarm (originally called CAStor) was ahead of its time. So, when other companies were jumping on the bandwagon, Caringo engineering had field-hardened the product, built a vault of IP, gained expertise with many use cases and developed best practices for integration, deployment and data management. Over the past dozen years, we have helped numerous customers store, organize and access massive amounts of unstructured data.
Here’s a breakdown of some of the vertical market segments where Caringo object storage flourishes, the benefits Swarm provides in these types of use cases, and links for additional information.

| Vertical | Swarm Benefits | More Info |
| --- | --- | --- |
| M&E (VoD, streaming, active archive, digital and media asset management, pre/post-production) | Secure content delivery and distribution with hassle-free scalability on any mix of standard hardware, bringing the operational benefits of a secure cloud to your data center. | M&E solution page; M&E solution brief |
| HPC (machine-generated data, multi-protocol storage and research) | Provides massively scalable multi-protocol access that supports both traditional and new applications. Rapid search of billions of files with customizable metadata and robust multi-tenancy enables metered and monitored collaboration. Facilitates shared access to large datasets with integrated NoSQL search and programmatic or web-based metadata customization. | HPC solution page |
| Enterprise IT (tier to public cloud; Windows Server & NetApp optimization) | Eliminates data silos, reduces (and often eliminates) the need for backup and supports the new paradigm of storage while still maximizing investment in existing infrastructure. | Enterprise IT solution page; TechGenix review of FileFly |
| Federal, Government & Law (intelligence & evidence, video surveillance, geospatial data) | Ensures that evidence has not been tampered with using compliance features such as WORM, Legal Hold, Integrity Seals and the ability to audit access logs. | City of Austin case study; Facilitating Information Governance webinar |
| Healthcare/Medical | Integrates with popular PACS, including Acuo and DeJarnette, and protects from accidental medical image deletion via WORM. Disassociates filename or patient-identifiable information (enabling HIPAA compliance). | Medical solution page |
Would your organization benefit from adding an object storage solution? Contact us and we’d be happy to discuss your specific requirements and help you determine if Swarm object storage is right for you.
The post Object Storage Solves Data Retention and Access Issues appeared first on Caringo.
Next week, I have an exciting “first” in my career: I’ll be in Frankfurt representing Caringo at the prestigious International Supercomputing event (ISC). ISC is focused on addressing High-Performance Computing (HPC) technological development and its application in scientific and advanced commercial environments. And of course, all of this is made more exciting for me because England’s World Cup matches are the day before ISC and the day after!
Part of the attraction for me in coming to Caringo was the potential I saw for Caringo Swarm in the verticals I know best: Pro Sports and Media & Entertainment (M&E). Seeing the diversity of the customers that Caringo’s technology is empowering was a revelation for me: Caringo Swarm has been chosen by federal governments, healthcare organizations, international cloud providers and most intriguing for me—High-Performance Computing (HPC).
I’m probably not alone in having had the preconception that HPC simply needed high-performance storage. When I joined Caringo, I quickly learned from my new colleagues that the massive scale of data in a supercomputing environment needs more than just performance, and that these hyper-scale customers are just as focused as those in other verticals on lowering storage TCO. For those in HPC, Swarm’s value includes multi-tenancy and compliance, and secure data access that simplifies workflows is just as highly valued. Download this solution brief for more information.
Caringo has garnered impressive credibility in the HPC arena as our flagship product Swarm was recently chosen to be an integral technology in powering one of the UK’s prestigious (and hyper-scaled) environmental super-data-clusters.
Of course, Caringo also has existing deployments at national laboratories in the US, higher education research centres around the globe, and many Federal organisations that are responsible for data collection and analysis. As “supercomputing” becomes more prevalent in the commercial space, innovative technologies like Swarm that are capable of handling the scale and throughput of massive data sets while providing advanced methods of accessing and manipulating data are truly coming of age. Recently, Caringo Co-founder & CTO Jonathan Ring opined that object storage was becoming the new tier 1 storage and Adrian “AJ” Herrera, VP of Marketing, explored how object storage now competes with parallel file systems.
I am looking forward to showcasing the Caringo software suite to delegates from the scientific research space and industries such as automotive, aerospace and healthcare, along with the many other verticals where Caringo has a growing footprint. We will have a team of System Architects, Technical Sales, reseller and technology partners at ISC. Given the highly technical audience, we will conduct whiteboard architectural sessions and software demonstrations. If you are planning to be at ISC, feel free to email me directly to arrange a time to meet with us and learn what Swarm can do for you.
Although it is not October and we are not in Bavaria, we will still be raising a toast each afternoon at the Caringo Stand (K-413). Our happy hour is co-sponsored by our mates at Boston Limited and it will kick off Monday at 4 pm and Tuesday and Wednesday at 3 pm. I look forward to seeing you there!
The post The World Cup of Supercomputing: ISC2018 & Object Storage appeared first on Caringo.
I often joke with co-workers that life is never boring at Caringo. This week has been particularly “not boring.” In fact, it has been downright exciting. Yesterday, we announced that our VP of Product Tony Barbagallo will be stepping into the role of President and Chief Executive Officer and that our Co-Founder Jonathan Ring will be returning to his passion and technical roots as our Chief Technical Officer.
While leadership changes sometimes create discord, when you have a proven leader with a track record who is also a known quantity, it is an easy transition to make. While Caringo may be 13 years old, our culture is still a “start-up.” As an organization that has been engineering-led since inception, it is our commitment to leading innovation in the software-defined storage (SDS) space and to providing our customers with best-in-class products and unrivaled support that drives us each and every day.
In an interview this week with Dave Raffo of TechTarget, Tony Barbagallo talked about where we are in the company’s growth and how we stack up to the newcomers in the SDS arena, particularly those focused on object-based technology. He said, “There are a lot of new vendors springing up, because object is becoming more popular and more mainstream, but they’re taking shortcuts.”
While we continue to be agile and respond to customer and market needs, we do so in a strategic and thoughtful manner to ensure we take care of our customers and their data. In the 21st century, data is the lifeblood of every successful business and government entity, and retaining it cost-effectively, while ensuring the data is continuously protected and accessible, can literally make or break an organization. The right storage infrastructure provides a competitive advantage, and as the data deluge continues, that advantage will become more and more critical. This is driving our strategy of expanding our ecosystem of technology partners and resellers.
Case in point: this week, we announced a technology partnership with Komprise to solve issues caused by the astonishing growth of unstructured data such as documents, videos, photos and audio files of ever-increasing quality and resolution, by pairing Komprise intelligent data management technology with Caringo Swarm, hassle-free, limitless storage.
We’ve long known that traditional NAS devices are a limiting technology and that, in time, organizations would need to turn to more elegant tools to expand storage capabilities and secure massive amounts of data on premise, eliminating the worry of ransomware and hacks. Komprise and Caringo have made that transition easy so you can reap the many benefits of object-based storage.
Join us June 21 for our webinar “Slash Storage TCO for Rapidly Scaling Data Sets” at 10 AM PT/1 PM ET. This educational webinar will feature Glen Olsen, Caringo Product Manager, and Krishna Subramanian, Komprise COO. Register now to watch live or on demand and learn how Komprise and Caringo have partnered to solve today’s most pressing storage issues.
The official start of summer may be June 21, but the weather in Austin is already sweltering at 100 degrees Fahrenheit and the hot topic at Caringo headquarters is high-performance computing (HPC) as our Product and Engineering teams continue to develop new features for our field-hardened products and the Marketing and Sales teams get ready for ISC2018 in Frankfurt later this month and SC18 in Dallas this November.
In many HPC use cases, data sets are increasing exponentially and the variety of items stored is staggering. With variable file sizes and the need to support collaboration across multiple sites and applications, traditional storage solutions are quickly becoming obsolete as object-based storage solutions replace them or augment their capacity.
To address these challenges, we’ve partnered with Boston Servers | Storage | Solutions Limited. As a leading system integrator, Boston focuses on delivering high-quality, reliable systems using the latest technologies available—and that includes our Caringo Swarm Hassle-Free, Limitless Storage Software.
These solutions are not just casually thrown together. Boston’s labs meticulously test and evaluate all the latest HPC technologies to ensure their solutions utilise the most innovative and effective technologies on the market. Boston has chosen to work with Caringo on a number of large projects where they need massively scalable storage software that turns standard server hardware into a limitless pool of data resources, eliminating data silos and delivering continuous protection, multi-tenancy and metering for chargebacks. (Learn more by visiting our HPC Solutions Page on Caringo.com.)
On our next webinar, we will feature Konstantinos Mouzakitis, Senior HPC Systems Engineer at Boston, and Caringo Object Storage Solutions Architect Alex Oldfield. Having worked together on a number of implementations, these two seasoned engineers are joining forces in this educational webinar to explain the benefits of using object storage in various HPC use cases. In addition, they will take your questions live on June 12 at 8 AM PT/11 AM EDT/3 PM GMT/4 PM BST. Register now to watch the webinar live or on demand after the event.
Both companies will also be exhibiting at ISC High Performance in Frankfurt June 24–27. ISC is the world’s oldest and Europe’s foremost HPC conference. Caringo will be at Stand K-413 and Boston at Stand C-1232. Our companies will be co-sponsoring happy hour at the Caringo stand June 25 at 4 pm and June 26 and 27 at 3 pm. We hope to see you there.
And, make sure you have SC18 on your calendar and meet us in Dallas, Texas! We will be driving our trucks up from Austin, TX to meet you and bringing in object storage experts to help you architect the future of your storage over a nice, cold beer!
For the past few years, the value of object storage in the HPC space has been to provide economical storage that can be metered, with multi-protocol support and interfaces to traditional and new RESTful applications. At the heart of the value proposition is being able to offload primary storage; in most cases, that means SSD-based devices running the parallel file systems needed for modeling and rendering. That said, we are starting to see new emerging use cases in which object storage, for read-intensive workloads, performs better than parallel file systems.
The primary requirements driving these new use cases are the need for scalable multi-protocol support to accommodate variable file types and sizes and to enable access for various types of clients. I point out “scalable” here because these use cases are exposing the limitations of some object storage solutions based on underlying Linux file systems AND file-system-based solutions that provide an S3 interface.
As with any technology, implementations vary. For other vendors’ object storage solutions, we often see the underlying file system and reliance on beefy cache devices (to meet performance requirements) as the limit in scalable, multi-protocol support. In the parallel file system world, we often see the object interface layer, usually some form of S3 interface, as the bottleneck.
This leads us to why Caringo Swarm’s design (as detailed in this whitepaper) is quite advantageous for read-intensive HPC workloads. Operational benefits aside (like booting from bare metal and using any type or size of hardware), there are 3 high-level technical benefits to leveraging Swarm’s parallel architecture for HPC workloads:
- Optimized S3 protocol support with fast parallel uploading that leverages Swarm’s parallel architecture
- Pure object storage with no front-end caching; all nodes can handle all tasks
- True read/write/edit multi-protocol interoperability between NFS and S3
When you add these 3 benefits of Swarm’s architecture to the super-fast, low-latency networks and custom hardware configurations most HPC organizations have access to, you begin to solve the primary challenge for enabling collaboration—supporting variable file types and sizes and scaling data sets well beyond 100s of Petabytes.
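To make the parallel-upload point concrete, here is a minimal sketch of multipart-style concurrent uploading. The transport is a pluggable callback so the flow is runnable anywhere; against Swarm this would correspond to S3 multipart upload calls, and the part size and worker count are arbitrary illustrative choices:

```python
# Sketch only: split a payload into parts and send them concurrently.
# send_part is a stub standing in for a real S3 UploadPart call.
from concurrent.futures import ThreadPoolExecutor

def split_parts(data, part_size):
    """Slice the payload into fixed-size parts (last part may be short)."""
    return [data[i:i + part_size] for i in range(0, len(data), part_size)]

def upload_parallel(data, part_size, send_part, workers=4):
    """send_part(index, chunk) uploads one part; parts go up concurrently.

    Returns the per-part receipts (ETags, in a real multipart upload) in
    part order, which is what the final "complete" call would need.
    """
    parts = split_parts(data, part_size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves input order even though work runs in parallel.
        etags = list(pool.map(lambda iv: send_part(*iv), enumerate(parts)))
    return etags
```

The design choice worth noting is that ordering is recovered at completion time from the part indices, so the individual transfers are free to finish in any order and saturate the network in parallel.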
If you are interested in learning more, we have a webinar coming up with our partner Boston Limited, a systems integrator focused on the HPC market. The webinar, titled Object Storage for High Performance Computing (HPC), will feature Konstantinos Mouzakitis, Boston Limited Senior HPC Systems Engineer, and Alex Oldfield, Caringo Solutions Architect. This is an excellent chance for you to have access to highly experienced technical resources and to ask questions. Register now to watch live or on demand.
The post Can Object Storage Compete with Parallel File Systems? appeared first on Caringo.
The author Oscar Wilde once said that “To expect the unexpected shows a thoroughly modern intellect.” And, with today’s explosion of unstructured data (⅓ of which is considered “sensitive”) in enterprise organizations, being prepared for the unexpected just may be the key to survival when it comes to facilitating information governance. Because, no matter how well you’ve architected your organization’s storage, there are times when you must make changes to your current workflow, whether necessitated by business, industry or government regulations. Mastering control of these ever-expanding enterprise data sets is not just an IT issue; it often affects all aspects of an organization.
Last week, I had the pleasure of hosting a webinar with Jacques Sauvé, Information Governance Professional (IGP) and Director of Partner Enablement at NetGovern, and Caringo’s Global Sales Director Ben Canter that focused on helping executives new to information governance understand the steps in electronic discovery as well as the importance of providing a secure archive. Some of the statistics presented were, to say the least, mind-boggling.
We all know the serious implications of failing to comply with governmental regulations: HIPAA/HITECH, PCI, Freedom of Information Act, GDPR, and many more. And, compliance for large, rapidly scaling data sets requires tight integration between information governance and underlying storage infrastructure.
I invite you to watch this webinar on demand to learn more.
The post Expecting the Unexpected: Facilitating Information Governance appeared first on Caringo.
The world of storage has changed radically. The transition of fixed content from shared network drives and home folders is overshadowed by a massive increase in machine- and user-generated data (audio, photos and video). Expensive high-performance block or file storage can no longer economically hold this volume of data, nor is it advantageous to do so. This shift in datasets is bringing with it new requirements for accessibility and distribution—flipping the storage paradigm upside down. If the definition of Tier 1 is based on importance or value to the organization, then for many data-driven organizations, object storage will become the new Tier 1.
Databases and analytical environments (Spark, HDFS, etc.) are considered transient destinations to run analysis, and cloud compute is ideal for processing data sets. However, once results are obtained, data sets are often deleted because of ongoing storage costs. In addition, web applications now need to service millions of distributed customers. The impact on the infrastructure? Expensive file or block storage (with high single-file or block write performance) has become the temporary target for analyzing data, while object storage, designed to handle massive throughput, has become the more permanent environment that supports storage and management of ever-expanding and distributed data sets.
To remain competitive and manage the costs of storage, you need to go with the only infrastructure that can absorb the massive influx of data, which is object or scale-out storage. Object storage is capable of handling many connections, writing in parallel, providing massive throughput, consolidating data from distributed sources leveraging the web, and handling many protocol inputs (for seamless integration with analysis and cloud platforms). This provides the most cost-effective platform for data-driven organizations.
The management of data should be thought of in terms of “gathering,” “cataloging,” “analyzing,” “annotating” and “distributing.” A proper object storage platform with built-in metadata management leveraging NoSQL infrastructure can serve four of these five functions. With the many parallel options for inputting and outputting data, object storage can readily place data into temporary compute space for analysis. The output can be stored with rich metadata, and the data's metadata can be annotated as necessary to constantly improve the organization of your corporate data resource.
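As a rough sketch of that "annotating" step, here is how an analysis result might be folded back into an object's metadata; the object shape and field names are purely illustrative, not a real Swarm API:

```python
# Sketch only: fold an analysis outcome back into an object's metadata,
# so "analyzing" feeds "cataloging". All names here are illustrative.
def annotate(obj, **new_fields):
    """Return a copy of the object with additional metadata fields."""
    merged = {**obj["metadata"], **new_fields}
    return {**obj, "metadata": merged}

raw = {"name": "telemetry-2018-06.bin",
       "metadata": {"source": "sensor-array", "stage": "gathered"}}

# After an analysis pass completes, record its outcome on the object
# itself, where the storage system will protect it with the data.
curated = annotate(raw, stage="analyzed", anomaly_count=3)
```

Because the annotation lives with the object rather than in a side database, later saved queries (e.g., "everything at stage analyzed") can find the results without any separate catalog to keep in sync.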
Object storage optimizes your compute infrastructure and costs by allowing the sharing of these resources or leveraging the temporary compute infrastructure that the cloud provides. So, if object storage provides an instantly accessible platform for your data that reduces your current storage TCO while enabling analysis and distribution in an elastic fashion that you can quickly scale up and scale down, then I argue that object storage should be the new Tier 1 for any organization that is serious about extracting value from continued access to its unstructured data sets.
The value of the cloud and mobile devices is undeniable. The elastic resources provided and the ability to create and access data from any location have changed society. However, as with any disruptive technological movement, it presents new challenges. Privacy concerns run rampant, and laws and regulations that have struggled to keep up are now being retrofitted to existing workflows, testing the limits of most information governance and IT execs. Unless you have been avoiding email, you have undoubtedly received a General Data Protection Regulation (GDPR) compliance request of some sort. Why? Because penalty fees for non-compliance can go all the way up to €20 million! So, how do you enable information governance in the cloud age?
This is a topic we are going to explore in our upcoming webinar titled Enabling Information Governance for Rapidly Scaling Data Sets. Our presentation will focus on solving 3 pain points:
- Where is the sensitive information?
- Who owns it?
- How do we protect the organization and its data?
In collaboration with our longtime partner and customer NetGovern (formerly Netmail), we will present a high-level framework to help you develop your information governance approach. In addition, we will show how NetGovern integrates Caringo Swarm, leveraging our secure software appliance in a way that is completely transparent to administrators and end users.
Expect this webinar to be a bit different from some of our more technical webinars. Jacques Sauvé, Information Governance Professional (IGP) and Director of Partner Enablement at NetGovern, and I will focus on helping any executive new to information governance understand the steps in electronic discovery and the importance of providing a secure archive. We hope you can join us live on May 17 at 10 AM Pacific so that you have the chance to ask questions.
The webinar will also be available on demand shortly after the broadcast.
Last week, I had the unique opportunity to attend and speak at the Salishan Conference on High Speed Computing at the Salishan Lodge on Gleneden Beach, Oregon. The conference was founded in 1981, and this year’s theme was “Maximizing Return on Investment for HPC in a Changing Computing Landscape,” with the majority of attendees hailing from Los Alamos, Lawrence Livermore, and Sandia National Laboratories. From many of the talks, I got the distinct impression that the topic of ROI may have been an undercurrent of virtually all 27 past conferences. Given the nature of the computational simulations they are undertaking, it’s no wonder that many of the applications and much of the hardware are quite distinctive to this space, with somewhat limited applicability elsewhere. In fact, I learned from one of the attendees from D-Wave Systems that there are only three of their quantum computers in production at customer sites, and all of those customers also attended the conference.
Setting aside the hardware though, they seem to have made some great strides on the software side, leveraging DoE budgets to develop open source software, and even creating communities around some of those projects. But here’s where it gets a bit sticky, as was evidenced by Dan Stanzione’s presentation, “A University HPC Center Perspective on HPC and Cloud Providers.” He concluded that while HPC centers and Cloud Providers potentially share some similar use cases, for the most part, you wouldn’t use an HPC system to run your company’s email, nor would you use a cloud service provider’s HPC service if you truly required highly optimized, high-performance simulations like many of the attendees conduct on a daily basis.
A great example of the need to optimize (customize) software for the hardware to squeeze out every last drop of performance was presented by Andrew Connolly from the University of Washington: “Surveying the Sky with LSST: Software as the Instrument of the Next Decade.” Over the first ten years of its lifetime, this new generation of telescope will survey half of the sky in six optical colors, discovering 37 billion stars and galaxies and detecting about 10 million variable sources every night. The telescope will gather 15 Terabytes per night and will release over 12 Petabytes of queryable data annually.
So, what does all this have to do with object storage? Beyond object storage's role as an economical back-end archive for all of the simulation data being generated (as is the case at Argonne National Laboratory), my talk on whether object storage could actually replace parallel file systems for read-intensive HPC workloads (which is, in fact, what is happening in phase 4 of the JASMIN project with our customer Rutherford Appleton Laboratory; more on that at another time) seemed to resonate with much of the audience. It also spawned some internal debate on whether there could be a reduced need for POSIX front-ends to back-end object stores. This is, of course, a debate that will play out over time, and yet another example of the tension between rewriting applications to take advantage of the latest hardware (or software, for that matter) versus running more simulations and analysis with the existing software. Big trade-offs to think about…which was the entire point of the conference.
I’d like to extend my appreciation to the organizers of the Salishan Conference for the opportunity to speak and to learn about the challenges still facing the individuals and teams in this important industry, and invite you to contact us if you have questions about the role of object storage in high-performance computing.
While NAB stands for National Association of Broadcasters, the attendees in Las Vegas at the 95th NAB Show earlier this month represented a far broader audience than traditional broadcasters. From houses of worship, education, government, defense contractors, and the more obvious creators of television, movies, documentaries, music videos and news agencies, attendees came by to discuss use cases that involve rapidly scaling media libraries that need to remain instantly accessible.
Seeing “The Miracle Season” movie this week and a story on the “Bobby Kennedy for President” Documentary Series that will be airing on Netflix this coming weekend, remind me of just how often actual footage and artistic recreations are blended to tell stories and create art. Without the technology to keep yesterday’s and today’s unrelenting explosion of digital video accessible, we would quickly lose a wealth of information and history. “Moving pictures” date back to the 1890s, and the challenge for archivists and organizations around the world is how to safely archive those “moving pictures” so they can protect that history and keep it accessible.
From government and law enforcement agencies to local news stations and major television and motion pictures, we hear the same pain points of how difficult it is not just to store this type of data, but how trying it can be to find what you need years later. This is just the problem that Caringo set out to solve in 2005, as we pioneered the concept of Content Addressable Storage (CAS). Throughout the past 13 years, many organizations have turned to the experts at Caringo to provide the technology that enables them to meet their specific requirements, as our VP of Product Tony Barbagallo expounded upon last summer in the blog How Object Storage Meets Vertical Market Requirements.
Tony discussed in depth why object storage is a smart alternative to traditional storage systems (such as file-based storage systems, SAN and NAS) as well as the requirements that lead organizations to consider object storage solutions. Most importantly, he explained how Swarm Object Storage rises to meet those challenges. As our list of customers has grown, we’ve not only field-hardened our solution, we’ve expanded our product line and amassed a wealth of best practices that we use to help our customers implement the right solution to protect their data and support their business objectives. In fact, we actually help our customers turn storage into a competitive advantage. To learn more, I invite you to read this article recently published in the MESA Winter Journal.
Have questions? Our experts can help. Contact us and learn how Caringo Solutions can enable your business.
The post Merging Past & Present: NAB Show, M&E and Object Storage appeared first on Caringo.
Does anyone remember the children’s book The Hungry Thing? It’s a simple story about a Hungry Thing coming to town, sitting on his tail and pointing to a sign around his neck that says, “Feed Me.” He asks the townspeople for “Shmancakes,” which any smart preschooler knows rhymes with pancakes and so goes the story. Why do I bring this up? Well, because Swarm can feed data to other “hungry” clusters for collaboration and disaster recovery.
Feeds in Swarm is the name of our object routing mechanism, which simply uses your internet connection to distribute data from a source cluster to a destination cluster. Now there are two other uses for Feeds in Swarm, but we’ll talk about them later.
Swarm Storage protects against various disk failures and other hardware failures that might take out a machine, but it can’t protect against a true disaster like a flood. Feeds enables that protection by making copies of your data elsewhere. What gets replicated is a high-fidelity copy of the complete object, metadata and all, so it’s accessible and usable in any cluster in which the object resides. Feeds provides a backup and disaster recovery solution for environments with a network connection between the source and target cluster; the internet works quite well for Feeds. In these environments, feeds operate continuously in the background to keep up with source cluster intake. When Swarm recognizes new or updated objects in a domain that has been configured for replication, it copies these objects to an internal queue for transport.
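Because a replicated object carries its full content and metadata, you can sanity-check a feed from the outside through Swarm’s S3 interface. The sketch below is illustrative only: the endpoint URLs, bucket and key are hypothetical placeholders, and `replica_is_consistent` is a helper written for this example, not part of the Swarm API.

```python
def replica_is_consistent(src_head, dst_head):
    """A replicated object should carry identical content (same ETag)
    and identical custom metadata in the source and destination
    clusters, since Feeds replicates the complete object."""
    return (src_head["ETag"] == dst_head["ETag"]
            and src_head.get("Metadata", {}) == dst_head.get("Metadata", {}))

# Fetching the object heads with boto3 (endpoints and names are
# placeholders; requires boto3 and valid credentials):
#
# import boto3
# src = boto3.client("s3", endpoint_url="http://swarm-source.example.com")
# dst = boto3.client("s3", endpoint_url="http://swarm-dr.example.com")
# ok = replica_is_consistent(
#     src.head_object(Bucket="videos", Key="cam1.mp4"),
#     dst.head_object(Bucket="videos", Key="cam1.mp4"))
```

Comparing the ETag and the user-defined metadata map is a cheap way to confirm the destination cluster holds a usable copy without downloading the object itself.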
Replication can be as simple or complex as you require. You can use Feeds to create an offsite DR cluster or even create n-way replication for collaboration and data locality. I will also mention that data is replicated on a domain-by-domain basis, so you can choose what data to replicate and where. Check out the diagram for a couple of examples:
And you can even monitor your feeds from the Swarm UI:
I mentioned two other uses for Feeds earlier, and here they are:
First, Swarm also uses feeds to speed up searching through objects’ metadata. Metadata Search provides real-time metadata indexing and ad-hoc search capabilities within Swarm by name or metadata. The integrated Elasticsearch service (view this on-demand webinar for more on Elasticsearch) collects the metadata for each object and updates the search database in your Swarm network. When you update an object’s metadata or create a new object, domain, or bucket, the service collects only the metadata and not the actual content. Once metadata is indexed, I can search through all of the metadata in the cluster, both system-defined as well as custom metadata. If my cluster consists of surveillance video for instance, I can create a search to identify all the surveillance videos from the back parking lot of corporate headquarters for the last 24 hours. Watch this short video to learn more about metadata and how it is used in Swarm object storage.
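The surveillance example above can be expressed as a standard Elasticsearch query against the integrated search index. The index name, endpoint, and field names (`x_location_meta`, `last_modified`) below are hypothetical placeholders; your index will define its own field names for system and custom metadata.

```python
def last_24h_query(location):
    """Build an Elasticsearch bool query matching objects whose
    custom 'location' metadata equals `location` and whose
    last-modified timestamp falls within the past 24 hours."""
    return {
        "query": {
            "bool": {
                "must": [
                    {"term": {"x_location_meta": location}},
                    {"range": {"last_modified": {"gte": "now-24h"}}},
                ]
            }
        }
    }

# Running the search (placeholder endpoint; requires the `requests`
# package and a reachable Elasticsearch index):
#
# import requests
# hits = requests.post(
#     "http://swarm-search.example.com:9200/swarm-index/_search",
#     json=last_24h_query("back-parking-lot")).json()
```

Because only metadata is indexed, a query like this stays fast regardless of how large the objects themselves are.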
Second, we took the replication capability of Feeds, leveraging our original methodology of sending data between clusters, and have extended it to Azure Blob storage. With Feeds, you can now replicate objects on a domain-by-domain basis to native Azure blobs. Once the data is on Azure, you can leverage Azure’s compute, data-protection and long-term archive services. All data that remains on-premises is protected and managed by Caringo Swarm.
In other words, Feeds satisfies any data “hungry” process and is a robust, standard feature of Swarm, enabling replication for collaboration, disaster recovery and search indexing. Any version of Swarm over X.X supports Feeds, and it is just one of the many features that come standard. To learn more about some of these unique features, I recommend reading ‘Emergent Behavior: The Smarts of Swarm’ and, if you have any questions, don’t hesitate to contact us.
Today, we announced a technology partnership with Square Box Systems, who recently certified Caringo Swarm with their CatDV media asset management (MAM) suite. I wanted to share a few of my thoughts from the NAB Show floor, where we’ve been meeting many talented and smart IT and Creative Professionals from media and entertainment (M&E) as well as government organizations, educational institutions and houses of worship.
Our technology partnership with Square Box Systems is part of our ongoing mission to integrate with best-of-breed solutions and to simplify workflows and the underlying storage infrastructure. As media libraries continue to scale at an impressive rate, our object storage solution is an ideal way to bring security and accessibility that evolves with content creation and viewer consumption patterns.
“What Happens in Vegas, Stays in Vegas” may be the most famous marketing slogan ever attached to a city, and certainly represents the ambiance of “Sin City.” Since 1991, professionals from the Media & Entertainment (M&E) industry as well as other verticals that produce, store or distribute video have converged at the National Association of Broadcasters Show (NAB Show) in Las Vegas.
Now in its 95th year, the NAB Show is billed as the “ultimate event for the media, entertainment and technology industry, showcasing ground-breaking innovation and powerful solutions for professionals looking to create, manage, deliver and monetize content on any platform.” For those of us in the high-tech world with products that are used in M&E, the NAB Show is the place to be next week. Our Caringo crew is preparing to head to Vegas, and can be found first at the JB&A Pre-NAB Technology Event, where we will preview how organizations can leverage the benefits of hassle-free scale while enabling direct streaming and S3 support, and protecting their media libraries from file system exploits that lead to hacks and ransomware events.
Then, we head to the NAB Show expo (booth SL11807) where we will be hosting happy hour on Monday, April 9, and Tuesday, April 10, starting at 1:30 p.m. Come by for a cold brew and a demo, where our engineers can show you Swarm and explain how it provides limitless scale, the ability for just 1 administrator to literally manage hundreds of petabytes, and built-in protection from hacks and ransomware.
Swarm does all this while reducing storage total cost of ownership (TCO) to the point where it is almost sinful (often by up to 75%). This makes Swarm an ideal target for media movers and asset managers such as Marquis Project Parking, Pixit PixStor/Ngenea, CatDV and ReachEngine. Read Turning Your Storage into a Competitive Advantage, originally published in Broadcast Beat, to learn more.
Next week, what happens in Vegas won’t stay in Vegas. The knowledge gained, the connections and memories made, and swag like the infamous Caringo light-up yo-yos will return with attendees to the far reaches of the globe. And, for a limited time, Caringo is offering a no-charge, full-featured 100 TB Swarm software license for qualified M&E firms (including but not limited to recording studios, content creation and post-production houses, broadcasters, and studios). Stop by our booth to learn more or visit https://www.caringo.com/media-entertainment/. You can also email us at info@Caringo.com.
The post What Happens in Vegas, Stays in Vegas…Except Object Storage appeared first on Caringo.