Ceph v10.2.0 Jewel has been released. Ceph is a next-generation free-software distributed file system designed by Sage Weil (a co-founder of DreamHost) for his doctoral dissertation at the University of California, Santa Cruz. After graduating in 2007, Sage began working on Ceph full time to make it suitable for production use. Ceph's main goal is to be a POSIX-based distributed file system with no single point of failure, in which data is replicated fault-tolerantly and seamlessly. In March 2010, Linus Torvalds merged the Ceph client into kernel 2.6.34. An article on IBM developerWorks examines Ceph's architecture, its fault-tolerance implementation, and the features that simplify management of massive amounts of data.
Changelog:
CephFS:
This is the first release in which CephFS is declared stable and production ready! Several features are disabled by default, including snapshots and multiple active MDS servers. (A minimal client sketch follows this list.)
The repair and disaster recovery tools are now feature-complete.
A new cephfs-volume-manager module is included that provides a high-level interface for creating “shares” for OpenStack Manila and similar projects.
There is now experimental support for multiple CephFS file systems within a single cluster.
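For a sense of what the now-stable CephFS looks like from a client, here is a minimal sketch using the python-cephfs (libcephfs) bindings. The conffile path and the directory and file names are assumptions for illustration; by default the client mounts the single active filesystem.

    import cephfs

    # Minimal libcephfs client session; paths below are placeholders.
    fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
    fs.mount()                                # mount the default filesystem
    fs.mkdir('/demo', 0o755)
    fd = fs.open('/demo/hello.txt', 'w', 0o644)
    fs.write(fd, b'hello from jewel', 0)      # write at offset 0
    fs.close(fd)
    print(fs.stat('/demo/hello.txt'))
    fs.shutdown()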
RGW:
The multisite feature has been almost completely rearchitected and rewritten to support any number of clusters/sites, bidirectional fail-over, and active/active configurations.
You can now access radosgw buckets via NFS (experimental).
The AWS4 authentication protocol is now supported.
There is now support for S3 request payer buckets. (A client sketch exercising both AWS4 authentication and request payer buckets follows this list.)
The new multitenancy infrastructure improves compatibility with Swift, which provides a separate container namespace for each user/tenant.
The OpenStack Keystone v3 API is now supported. There is a range of other small Swift API features and compatibility improvements as well, including bulk delete and SLO (static large objects).
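The AWS4 signing and request payer support can be exercised from any standard S3 SDK; the boto3 sketch below is one hedged example. The endpoint URL and the credentials are placeholders, and signature_version='s3v4' is what selects AWS4 signing in botocore.

    import boto3
    from botocore.client import Config

    # Hypothetical radosgw endpoint and credentials.
    s3 = boto3.client(
        's3',
        endpoint_url='http://rgw.example.com:7480',
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
        config=Config(signature_version='s3v4'),  # AWS4 signing
    )

    s3.create_bucket(Bucket='demo-bucket')

    # Mark the bucket as "requester pays".
    s3.put_bucket_request_payment(
        Bucket='demo-bucket',
        RequestPaymentConfiguration={'Payer': 'Requester'},
    )

    s3.put_object(Bucket='demo-bucket', Key='hello.txt', Body=b'hello')

    # Readers of a request-payer bucket must acknowledge the charge.
    obj = s3.get_object(Bucket='demo-bucket', Key='hello.txt',
                        RequestPayer='requester')
    print(obj['Body'].read())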
RBD:
There is new support for mirroring (asynchronous replication) of RBD images across clusters. This is implemented as a per-RBD image journal that can be streamed across a WAN to another site, and a new rbd-mirror daemon that performs the cross-cluster replication.
The exclusive-lock, object-map, fast-diff, and journaling features can be enabled or disabled dynamically. The deep-flatten feature can be disabled dynamically but not re-enabled.
The RBD CLI has been rewritten to provide command-specific help and full bash completion support.
RBD snapshots can now be renamed. (A sketch exercising dynamic feature toggling and snapshot rename follows this list.)
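A hedged python-rbd sketch of the dynamic feature toggling and snapshot rename described above. The pool, image, and snapshot names are made up; update_features and rename_snap are assumed here to be the bindings' counterparts of the rbd CLI's feature enable and snap rename operations, and the journaling feature enabled below is the per-image journal that rbd-mirror streams between clusters.

    import rados
    import rbd

    # Assumes a reachable cluster via the default conffile and a pool
    # named 'rbd'.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')
    try:
        rbd.RBD().create(ioctx, 'demo-img', 1 << 30,       # 1 GiB image
                         old_format=False,
                         features=rbd.RBD_FEATURE_LAYERING)
        with rbd.Image(ioctx, 'demo-img') as image:
            # Dynamically enable exclusive-lock, then journaling
            # (journaling requires exclusive-lock).
            image.update_features(rbd.RBD_FEATURE_EXCLUSIVE_LOCK, True)
            image.update_features(rbd.RBD_FEATURE_JOURNALING, True)
            image.create_snap('before')
            image.rename_snap('before', 'baseline')        # new in Jewel
    finally:
        ioctx.close()
        cluster.shutdown()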
RADOS:
BlueStore, a new OSD backend, is included as an experimental feature. The plan is for it to become the default backend in the K or L release.
The OSD now persists scrub results and provides a librados API to query them in detail (see the sketch after this list).
We have revised our documentation to recommend against using ext4 as the underlying filesystem for Ceph OSD daemons due to problems supporting our long object name handling.
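One way to inspect the persisted scrub results is through the rados CLI, which fronts the new librados query interface. The sketch below shells out from Python; the placement group id is a placeholder (pick a real one from ceph pg dump), and the JSON field names are an assumption about the report layout.

    import json
    import subprocess

    pgid = '0.6'  # hypothetical placement group id
    out = subprocess.check_output(
        ['rados', 'list-inconsistent-obj', pgid, '--format=json'])
    report = json.loads(out)
    for entry in report.get('inconsistents', []):
        print(entry['object']['name'], entry.get('errors', []))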
The full release notes can be viewed here.