This is a simple example of a federated gateways configuration to set up asynchronous replication between two Ceph clusters. Continue reading
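As a taste of what the full post covers, here is a minimal sketch of the master-zone side, assuming a Firefly-era federated setup; the region, zone, and host names below are placeholders, not taken from the post:

```
# Hypothetical ceph.conf excerpt for the gateway serving the master zone
[client.radosgw.us-east-1]
    rgw region = us
    rgw region root pool = .us.rgw.root
    rgw zone = us-east
    rgw zone root pool = .us-east.rgw.root
    rgw dns name = rgw-east.example.com
    host = ceph-rgw-east
```

Once both zones are configured, the asynchronous sync itself is performed by the radosgw-agent daemon, pointed at the source and destination zone endpoints via its own configuration file.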
This option addresses a recurring concern with heterogeneous clusters: the disks do not all have the same performance, or the same performance/size ratio. Continue reading
The use case is simple: I want to use both SSD disks and SATA disks within the same machine, and ultimately create pools that point to either the SSD or the SATA disks. To achieve this, we need to modify the CRUSH map. In my example, each host has 2 SATA disks and 2 SSD disks, and there are 3 hosts in total. Continue reading
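As a sketch of the idea (the full post has the complete map), one common approach from that era is to declare each physical host twice in the decompiled CRUSH map, once per disk type, and hang the two halves under separate roots. All names, IDs, and weights below are placeholders:

```
# Decompiled CRUSH map excerpt (hypothetical names/ids): the physical host
# ceph-01 appears as two logical hosts, one per disk type.
host ceph-01-ssd {
    id -2
    alg straw
    hash 0  # rjenkins1
    item osd.0 weight 1.000
    item osd.1 weight 1.000
}
host ceph-01-sata {
    id -3
    alg straw
    hash 0  # rjenkins1
    item osd.2 weight 1.000
    item osd.3 weight 1.000
}
root ssd {
    id -10
    alg straw
    hash 0  # rjenkins1
    item ceph-01-ssd weight 2.000
    # ... same for ceph-02-ssd and ceph-03-ssd
}
rule ssd {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take ssd
    step chooseleaf firstn 0 type host
    step emit
}
```

A pool can then be pinned to the SSD rule with something like `ceph osd pool set ssd-pool crush_ruleset 1` (pre-Luminous syntax).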
It is not always easy to know how to organize your data in the CRUSH map, especially when trying to distribute the data geographically while also separating different types of disks, e.g. SATA, SAS, and SSD. Let's see what CRUSH map hierarchies we can imagine. Continue reading
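One hierarchy the post might explore, sketched here with placeholder names and weights, is a separate CRUSH tree per disk type, each split by datacenter, so that a rule first picks the disk type and then spreads replicas across datacenters:

```
# Hypothetical hierarchy: one root per disk type, datacenters underneath.
root ssd {
    id -20
    alg straw
    hash 0  # rjenkins1
    item dc1-ssd weight 2.000
    item dc2-ssd weight 2.000
}
rule ssd-multi-dc {
    ruleset 2
    type replicated
    min_size 1
    max_size 10
    step take ssd
    step chooseleaf firstn 0 type datacenter
    step emit
}
```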
You have probably already been faced with migrating all the objects from one pool to another, especially to change parameters that cannot be modified on an existing pool: for example, to migrate from a replicated pool to an EC pool, to change the EC profile, or to reduce the number of PGs… There are different methods, depending on the contents of the pool (RBD, objects) and its size… Continue reading
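Two of the usual approaches, sketched here with placeholder pool and image names:

```
# Method 1: plain object copy.  cppool is a simple client-side loop over
# all objects and does not preserve snapshots -- fine for small pools.
rados cppool mypool mypool.new
ceph osd pool rename mypool mypool.old
ceph osd pool rename mypool.new mypool

# Method 2: for RBD content, export/import each image into a new pool.
rbd export mypool/myimage - | rbd import - newpool/myimage
```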
Calamari is a web-based monitoring and management tool for Ceph. In this post we will install Calamari on a working Ceph cluster. The Calamari node and all Ceph nodes run Ubuntu 14.04. We will use the ceph-deploy utility to install the packages. This article is only for test purposes and gives you an idea of the Calamari installation. Continue reading
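The broad strokes look something like this, assuming the Calamari packages have already been built for Ubuntu 14.04 (the hostnames are placeholders):

```
# On the Calamari server:
sudo dpkg -i calamari-server_*.deb calamari-clients_*.deb
sudo calamari-ctl initialize

# From the ceph-deploy admin node, point the Ceph nodes at Calamari:
ceph-deploy calamari connect ceph-node1 ceph-node2 ceph-node3
```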
Ceph is a widely used open source storage platform. It provides high performance, reliability, and scalability. The Ceph free distributed storage system provides an interface for object, block, and file-level storage. Ceph is built to provide a distributed storage system without a single point of failure.
In this tutorial, I will guide you through installing and building a Ceph cluster on CentOS 7. A Ceph cluster requires these Ceph components (a minimal ceph-deploy sketch follows the list):
- Ceph OSDs (ceph-osd) – Handle the data store, data replication, and recovery. A Ceph cluster needs at least two Ceph OSD servers; I will use three CentOS 7 OSD servers here.
- Ceph Monitor (ceph-mon) – Monitors the cluster state, the OSD map, and the CRUSH map. I will use one server.
- Ceph Metadata Server (ceph-mds) – This is needed to use Ceph as a file system. Continue reading
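As announced above, here is a minimal sketch of the deployment flow, assuming one admin node with passwordless SSH to all hosts; the host and disk names (mon1, osd1–osd3, /dev/sdb) are placeholders:

```
# From the admin node:
ceph-deploy new mon1                          # create the initial cluster config
ceph-deploy install mon1 osd1 osd2 osd3       # install the Ceph packages on all nodes
ceph-deploy mon create-initial                # deploy the monitor and gather keys
ceph-deploy osd create osd1:/dev/sdb osd2:/dev/sdb osd3:/dev/sdb
ceph-deploy mds create mon1                   # only needed for CephFS
ceph-deploy admin mon1 osd1 osd2 osd3         # push ceph.conf and the admin keyring
```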