Using Ceph with Kubernetes

Download the external-storage project

git clone https://github.com/kubernetes-incubator/external-storage.git

Using CephFS with Kubernetes

Enter the directory containing the cephfs provisioner

cd external-storage/ceph/cephfs/
ls

This directory should contain the following files:

cephfs_provisioner  cephfs-provisioner.go  CHANGELOG.md  deploy  Dockerfile  Dockerfile.release  example  local-start.sh  Makefile  OWNERS  README.md

The deploy folder contains the manifests for deploying the provisioner, and the example folder provides samples for using CephFS.

Deploying the CephFS provisioner

  1. Deploy using RBAC. Edit the deployment file:
cd deploy/rbac
vi deployment.yaml

Add spec.template.spec.hostNetwork and spec.template.spec.tolerations, and change spec.template.spec.containers.image:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: cephfs
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      hostNetwork: true
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      containers:
      - name: cephfs-provisioner
        image: "s7799653/cephfs-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/cephfs
        - name: PROVISIONER_SECRET_NAMESPACE
          value: cephfs
        command:
        - "/usr/local/bin/cephfs-provisioner"
        args:
        - "-id=cephfs-provisioner-1"
      serviceAccount: cephfs-provisioner
  2. Create the cephfs namespace
kubectl create ns cephfs
  3. Deploy the provisioner to Kubernetes
kubectl apply -f ./
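After applying, it is worth confirming that the provisioner pod actually started. A quick check (a sketch; the resource names follow the manifests above):

```shell
# Check that the provisioner pod is running in the cephfs namespace
kubectl -n cephfs get pods -l app=cephfs-provisioner
# If it is stuck in Error or CrashLoopBackOff, inspect its logs
kubectl -n cephfs logs deploy/cephfs-provisioner
```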

Using CephFS in Kubernetes

  1. Create a new secret.yaml under the example folder and create the secret; its content is as follows:
apiVersion: v1
kind: Secret
type: "kubernetes.io/rbd"
metadata:
  name: ceph-secret
data:
  # ceph auth get-key client.admin | base64
  key: ************

Create it:

kubectl apply -f example/secret.yaml -n cephfs
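The key field in the secret must hold the base64-encoded admin key. On a Ceph node you would obtain it with ceph auth get-key client.admin, as the comment in the manifest notes; the key below is a fabricated placeholder just to illustrate the encoding step:

```shell
# Fabricated placeholder key -- on a real cluster, obtain it with:
#   ceph auth get-key client.admin
ADMIN_KEY='AQDplaceholderkeyfordemoonly=='
# Base64-encode it for the secret's data.key field
echo -n "$ADMIN_KEY" | base64
```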
  2. Edit example/class.yaml and create the StorageClass:

Change parameters.monitors to the IPs of your Ceph mon nodes (separate multiple mons with commas).
Change adminId to admin.
Change adminSecretName to ceph-secret.
Change adminSecretNamespace to cephfs.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 10.10.7.52:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: "cephfs"
  claimRoot: /pvc-volumes

Create it:

kubectl apply -f example/class.yaml -n cephfs
  3. Create a test PVC and pod

In example/claim.yaml, change metadata.name to claim1-cephfs. In example/test-pod.yaml, change spec.containers.image to busybox:1.24 and spec.volumes[0].persistentVolumeClaim.claimName to claim1-cephfs.

example/claim.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1-cephfs
spec:
  storageClassName: cephfs
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

example/test-pod.yaml

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox:1.24
    command:
    - "/bin/sh"
    args:
    - "-c"
    - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
    - name: pvc
      mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
  - name: pvc
    persistentVolumeClaim:
      claimName: claim1-cephfs
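The test pod's command is a simple writability probe: it exits 0 only if it can create a file on the mounted volume. The same pattern can be tried locally, using a temp directory as a stand-in for the CephFS mount:

```shell
# Stand-in for the CephFS volume mounted at /mnt in the pod
MNT=$(mktemp -d)
# The same probe the test pod runs: succeed only if the volume is writable
touch "$MNT/SUCCESS" && echo "writable" || echo "not writable"
# prints: writable
```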

Deploy:

kubectl apply -f example/claim.yaml
kubectl apply -f example/test-pod.yaml
kubectl get po
kubectl get pvc

If the test-pod pod's STATUS is Completed and the claim1-cephfs PVC's STATUS is Bound, the deployment succeeded.
Once everything looks good, you can delete the resources you just created:

kubectl delete -f example/claim.yaml
kubectl delete -f example/test-pod.yaml

Using Ceph RBD with Kubernetes

Enter the directory containing the rbd provisioner

cd external-storage/ceph/rbd/
ls

This directory should contain the following files:

CHANGELOG.md  cmd  deploy  Dockerfile  Dockerfile.release  examples  local-start.sh  Makefile  OWNERS  pkg  README.md

The deploy folder contains the manifests for deploying the provisioner, and the examples folder provides samples for using Ceph RBD.

Deploying the Ceph RBD provisioner

  1. Deploy using RBAC. Edit the following files:
cd deploy/rbac

deployment.yaml:

Add spec.template.spec.hostNetwork and spec.template.spec.tolerations, mount the host's /etc/ceph into the container, and change spec.template.spec.containers.image:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rbd-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      hostNetwork: true
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      containers:
      - name: rbd-provisioner
        image: "s7799653/rbd-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
        volumeMounts:
        - mountPath: /etc/ceph
          name: ceph-config
      serviceAccount: rbd-provisioner
      volumes:
      - name: ceph-config
        hostPath:
          path: /etc/ceph/

clusterrolebinding.yaml:

Change subjects.namespace to cephfs:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: cephfs
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io

rolebinding.yaml:

Change subjects.namespace to cephfs:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: cephfs
  2. Create the cephfs namespace (skip this step if it already exists; everything here is named cephfs so that all Ceph-related resources live in the same namespace).
kubectl create ns cephfs
  3. Deploy the provisioner to Kubernetes
kubectl apply -f ./ -n cephfs

Using Ceph RBD in Kubernetes

  1. Edit secrets.yaml in examples and create the secret

You can skip this step if you already completed the CephFS setup above.
This configuration uses only the single Ceph admin user.

apiVersion: v1
kind: Secret
type: "kubernetes.io/rbd"
metadata:
  name: ceph-secret
data:
  # ceph auth get-key client.admin | base64
  key: ************

Create it:

kubectl apply -f examples/secrets.yaml -n cephfs
  2. Edit class.yaml in examples and create the StorageClass

This configuration uses only the single Ceph admin user.

Change metadata.name to ceph-rbd.
Change parameters.monitors to the IPs of your Ceph mon nodes (separate multiple mons with commas).
Change adminId to admin.
Change adminSecretName to ceph-secret.
Change adminSecretNamespace to cephfs.
Change pool to rbd (adjust to your cluster: list existing pools with ceph osd pool ls, and create an rbd pool with ceph osd pool create rbd 128 && ceph osd pool application enable rbd rbd).
Change userId to admin.
Change userSecretName to ceph-secret.
Change userSecretNamespace to cephfs.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ceph-rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 10.10.7.52:6789
  pool: rbd
  adminId: admin
  adminSecretNamespace: cephfs
  adminSecretName: ceph-secret
  userId: admin
  userSecretNamespace: cephfs
  userSecretName: ceph-secret
  imageFormat: "2"
  imageFeatures: layering
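If the rbd pool referenced by the StorageClass does not exist yet, create it first. The pool setup commands mentioned above, gathered into one sketch (the pool name rbd and the PG count 128 are the example values from this guide; run these on a Ceph admin node):

```shell
# List existing pools to see whether an rbd pool is already present
ceph osd pool ls
# Create the pool with 128 placement groups (example value) ...
ceph osd pool create rbd 128
# ... and tag it for use by RBD
ceph osd pool application enable rbd rbd
```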

Create it:

kubectl apply -f examples/class.yaml -n cephfs
  3. Create a test PVC and pod

In examples/claim.yaml, change metadata.name to claim1-rbd and spec.storageClassName to ceph-rbd. In examples/test-pod.yaml, change spec.containers.image to busybox:1.24 and spec.volumes[0].persistentVolumeClaim.claimName to claim1-rbd.

examples/claim.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1-rbd
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 1Gi

examples/test-pod.yaml

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox:1.24
    command:
    - "/bin/sh"
    args:
    - "-c"
    - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
    - name: pvc
      mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
  - name: pvc
    persistentVolumeClaim:
      claimName: claim1-rbd

Deploy:

kubectl apply -f examples/claim.yaml
kubectl apply -f examples/test-pod.yaml
kubectl get po
kubectl get pvc

If the test-pod pod's STATUS is Completed and the claim1-rbd PVC's STATUS is Bound, the deployment succeeded.
Once everything looks good, you can delete the resources you just created:

kubectl delete -f examples/claim.yaml
kubectl delete -f examples/test-pod.yaml

Author: jxin
