
Alex Chalkias
on 1 April 2021

Ceph Pacific 16.2.0 is now available


April 1st 2021 – Today, Ceph upstream released the first stable version of ‘Pacific’, a full year after the last stable release, ‘Octopus’. Pacific focuses on usability and cross-platform integrations, with exciting features such as iSCSI and NFS support promoted to stable, as well as major dashboard enhancements. This makes it easier to integrate, operate and monitor Ceph as a unified storage system. Ceph packages are built for Ubuntu 20.04 LTS and Ubuntu 21.04 to ensure a uniform experience across clouds.

You can try the Ceph Pacific beta by following these instructions, and your deployment will automatically upgrade to the final release as soon as it’s made available from Canonical. 

What’s new in Ceph Pacific?

As usual, the Ceph community grouped the latest enhancements into five themes, listed here in descending order of significance: usability, quality, performance, multi-site usage, and ecosystem & integrations.

Usability

The highlight of Pacific is the cross-platform availability of Ceph, with a new native Windows RBD driver and the iSCSI and NFS gateways becoming stable. These allow a wide variety of platforms to take advantage of Ceph: from your Linux-native workloads to your VMware clusters to your Windows estate, you can leverage scalable software-defined storage to drive infrastructure costs down.
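
To give a feel for the block layer that the new Windows driver now exposes natively, here is a minimal sketch using Ceph’s official Python bindings (the python3-rados and python3-rbd packages on Ubuntu). The config path, pool name and image name are assumptions for illustration:

```python
import rados
import rbd

# Connect using a local ceph.conf and default admin credentials
# (the path, pool and image names below are illustrative only).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')  # an existing pool
    try:
        rbd.RBD().create(ioctx, 'demo-image', 1 * 1024**3)  # 1 GiB image
        with rbd.Image(ioctx, 'demo-image') as image:
            image.write(b'hello from librbd', 0)  # write at offset 0
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```

The same image could then be consumed from a Windows host through the new RBD driver, or exported over iSCSI to platforms such as VMware.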

It is also worth mentioning that the Ceph dashboard now covers all core Ceph services and extensions – i.e. object, block, file, iSCSI and NFS Ganesha – as it evolves into a robust and responsive management GUI in front of the Ceph API. It also provides new observability and management capabilities: managing Ceph OSDs and multi-site deployments, enforcing RBAC, defining security policies, and more.
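
As a rough illustration of the REST API sitting behind that GUI, the sketch below authenticates against the dashboard and reads a health summary. The host, credentials, endpoint paths and versioned Accept header are assumptions based on the upstream dashboard documentation, not verified against this release:

```python
import requests

BASE = 'https://ceph-dashboard.example.com:8443'  # hypothetical host

# Obtain a token from the dashboard REST API; verify=False only
# because test clusters commonly use self-signed certificates.
headers = {'Accept': 'application/vnd.ceph.api.v1.0+json'}
resp = requests.post(f'{BASE}/api/auth',
                     json={'username': 'admin', 'password': 'secret'},
                     headers=headers, verify=False)
token = resp.json()['token']

# Use the token to query a minimal cluster health summary.
headers['Authorization'] = f'Bearer {token}'
health = requests.get(f'{BASE}/api/health/minimal',
                      headers=headers, verify=False)
print(health.json())
```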

A new host maintenance mode reduces unexpected outages, as the cluster is informed when a node is about to undergo maintenance. Cephadm, the orchestrator module, gained a new exporter/agent mode that improves performance when monitoring large clusters. Other notable usability enhancements in Pacific include a simplified status output and a progress bar for cluster recovery processes, MultiFS being marked stable, and MDS-side encrypted file support in CephFS.
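
The maintenance workflow can also be driven programmatically through the orchestrator. Below is a sketch using the rados Python bindings, where the JSON ‘prefix’ mirrors the ceph orch host maintenance enter CLI command; ‘node-1’ and the exact argument name are assumptions for illustration:

```python
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# cephadm's orchestrator runs inside the mgr, so the command is sent
# there; this mirrors `ceph orch host maintenance enter node-1`.
cmd = json.dumps({'prefix': 'orch host maintenance enter',
                  'hostname': 'node-1'})
ret, out, errs = cluster.mgr_command(cmd, b'')
print(ret, out.decode(), errs)

cluster.shutdown()
```

Bringing the host back is the symmetric ‘orch host maintenance exit’ command.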

Quality

RADOS is, as usual, the focal point of the quality improvements that make Ceph more robust and reliable. Placement groups can now be deleted significantly faster, with a smaller impact on client workloads. On CephFS, a new feature bit allows required file system features to be turned on or off, so that older clients lacking a required feature can be rejected. Lastly, enhanced public dashboards based on Ceph’s telemetry feature are now available, giving insights into how Ceph clusters and storage devices are used in the wild and helping drive data-based design and business decisions.
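
If you want your cluster to contribute to those public dashboards, telemetry is opt-in. A sketch of enabling it through the Python bindings, mirroring the ceph telemetry on CLI command; the JSON argument name is an assumption, and the license string is the data-sharing license upstream asks you to accept:

```python
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
# Telemetry is a mgr module; this mirrors
# `ceph telemetry on --license sharing-1-0`.
cmd = json.dumps({'prefix': 'telemetry on', 'license': 'sharing-1-0'})
ret, out, errs = cluster.mgr_command(cmd, b'')
print(ret, errs or out.decode())
cluster.shutdown()
```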

Performance

The RADOS BlueStore backend now supports RocksDB sharding to reduce disk space requirements, a hybrid allocator lowers memory use and disk fragmentation, and finer-grained memory tracking has been added. The use of the mclock scheduler and extensive testing on SSDs helped improve QoS and system performance. For CephFS, ephemeral pinning, improved cache management and asynchronous unlink/create operations improve performance and scalability while reducing unnecessary round trips to the MDS.
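
To experiment with the mclock scheduler yourself, it is selected via the osd_op_queue option. A sketch via the Python bindings, mirroring the ceph config set CLI command; the JSON argument names are assumptions for illustration:

```python
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
# Mirrors `ceph config set osd osd_op_queue mclock_scheduler`;
# the setting takes effect when the OSDs restart.
cmd = json.dumps({'prefix': 'config set', 'who': 'osd',
                  'name': 'osd_op_queue', 'value': 'mclock_scheduler'})
ret, out, errs = cluster.mon_command(cmd, b'')
print(ret, errs or out.decode())
cluster.shutdown()
```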

Ceph Crimson – the project to rewrite the Ceph OSD module to better support persistent memory and fast NVMe storage – got a prototype of the new SeaStore backend, alongside a compatibility layer for the legacy BlueStore backend. New recovery, backfill and scrub implementations are also available for Crimson with the Pacific release.

Multi-site

The snapshot-based multi-site mirroring feature in CephFS brings automatic replication of snapshots of any directory from a source cluster to remote clusters. Similarly, the per-bucket multi-site replication feature in RGW, which received significant stability enhancements, allows for async data replication at the bucket level while federating multiple sites at once.
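
Enabling the CephFS side of this is a matter of turning on mirroring for a file system. A sketch mirroring the ceph fs snapshot mirror enable CLI command; the JSON argument name and the file system name ‘cephfs’ are assumptions for illustration:

```python
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
# Mirrors `ceph fs snapshot mirror enable cephfs`; snapshot mirroring
# is coordinated by a mgr module, with cephfs-mirror daemons doing
# the actual synchronisation to the remote cluster.
cmd = json.dumps({'prefix': 'fs snapshot mirror enable',
                  'fs_name': 'cephfs'})
ret, out, errs = cluster.mgr_command(cmd, b'')
print(ret, errs or out.decode())
cluster.shutdown()
```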

Ecosystem & integrations

Enhancing the user experience of onboarding to Ceph is the focus of the ecosystem theme, with ongoing projects to revamp the documentation and the ceph.io website while removing instances of racially charged terms. Support for ARM64 is also in progress, with new CI, release builds and testing workflows; Pacific will be the first Ceph release to be available on ARM, although initially with limited support.

On the integrations front, Rook is now able to operate stretch clusters across two data centres with a MON in a third location, and can manage CephFS mirroring using CRDs. The container storage interface (CSI) allows OpenStack Manila to integrate with container and cloud platforms, bringing enhanced management and security capabilities to CephFS and RBD.

Ceph Pacific available on Ubuntu

Try Ceph Pacific now on Ubuntu to combine the benefits of a unified storage system with a secure and reliable operating system. You can install the Ceph Pacific beta from the OpenStack Wallaby Ubuntu Cloud Archive on Ubuntu 20.04 LTS, or using the development version of Ubuntu 21.04 (Hirsute Hippo).

Canonical supports all Ceph releases as part of the Ubuntu Advantage for Infrastructure enterprise support offering. Canonical’s Charmed Ceph wraps the upstream Ceph packages in operators called charms, which add lifecycle automation capabilities, significantly simplifying Ceph deployments and day-2 operations thanks to the Juju model-driven framework. Charmed Ceph Pacific will be released in tandem with the Canonical OpenStack Wallaby release in late April 2021.

Learn more about Canonical Ceph storage offerings
