[CEPH] INSTALL
Author: Administrator | Date: 2015-03-25 (Wed) 15:43 | Views: 5087

Setup
 OS   : Ubuntu 14.04.2 LTS x86_64
 Ceph : version 0.87.1 (Giant)
 - mon x3 : mon0, mon1, mon2
 - osd x3 : osd0, osd1, osd2
 - mgmt   : mgmt
 - mds    : mds0

1. /etc/hosts 
115.XXX.XXX.63 mgmt
115.XXX.XXX.64 mds0
115.XXX.XXX.60 mon0
115.XXX.XXX.61 mon1 
115.XXX.XXX.62 mon2
115.XXX.XXX.205 osd0
115.XXX.XXX.207 osd1
115.XXX.XXX.211 osd2
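
The same hosts file has to resolve on every node. A quick sanity check from any host (names as defined above):

ceph@mgmt:~$ getent hosts mon0 osd0 mds0   # should print the addresses listed above
ceph@mgmt:~$ for h in mon0 mon1 mon2 osd0 osd1 osd2 mds0; do ping -c1 -W1 $h >/dev/null && echo "$h ok"; done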


2. Set the hostname on each node
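
On Ubuntu 14.04 that means writing /etc/hostname and applying it without a reboot. For example, on the first monitor (repeat with the matching name on each node):

ceph@mon0:~$ echo mon0 | sudo tee /etc/hostname
ceph@mon0:~$ sudo hostname mon0
ceph@mon0:~$ hostname   # verify: prints mon0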


3. Create the ceph user (run on every node)
# sudo useradd -d /home/ceph -m ceph
# sudo passwd ceph
# echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
# sudo chmod 0440 /etc/sudoers.d/ceph
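
A quick check that the account really has passwordless sudo (run as the ceph user; -n makes sudo fail instead of prompting):

ceph@mon0:~$ sudo -n whoami
root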


4. SSH keys (so the mgmt server can control every node without a password)
Generate the key on mgmt, then copy it to every node
ceph@mgmt:~$ ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ceph/.ssh/id_rsa): 
Created directory '/home/ceph/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/ceph/.ssh/id_rsa.
Your public key has been saved in /home/ceph/.ssh/id_rsa.pub.
The key fingerprint is:
db:e3:92:73:75:55:xx:a0:b1:24:ac:d3:93:0e:9e:2a ceph@mgmt
The key's randomart image is:
+--[ RSA 2048]----+
|         .. o .. |
|          .o +  +|
|         o .o   +|
|        + +     .|
|       .S= .   . |
|        oo. . .  |
|       ...o. .   |
|    E . +...     |
|     .   +.      |
+-----------------+


ceph@mgmt:~$ ssh-copy-id ceph@mon0
The authenticity of host 'mon0 (115.XXX.XXX.60)' can't be established.
ECDSA key fingerprint is 62:50:68:XX:b5:25:0a:35:4c:43:4a:e8:7c:88:8d:75.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
ceph@mon0's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'ceph@mon0'"
and check to make sure that only the key(s) you wanted were added.


ceph@mgmt:~$ ssh ceph@mon0
Welcome to Ubuntu 14.04.2 LTS (GNU/Linux 3.13.0-46-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Tue Mar 24 16:46:09 KST 2015

  System load:  0.0               Processes:           78
  Usage of /:   1.2% of 98.30GB   Users logged in:     1
  Memory usage: 3%                IP address for eth0: 115.XXX.XXX.60
  Swap usage:   0%

  Graph this data and manage this system at:
    https://landscape.canonical.com/
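
Only mon0 is shown above; the same ssh-copy-id has to reach every node. A loop over the names from /etc/hosts saves typing, and an ~/.ssh/config entry lets ceph-deploy log in as ceph without naming the user each time:

ceph@mgmt:~$ for h in mon0 mon1 mon2 osd0 osd1 osd2 mds0; do ssh-copy-id ceph@$h; done

ceph@mgmt:~$ cat >> ~/.ssh/config <<'EOF'
Host mon0 mon1 mon2 osd0 osd1 osd2 mds0
    User ceph
EOF
ceph@mgmt:~$ chmod 600 ~/.ssh/config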


5. Set up the Ceph package repository on mgmt
ceph@mgmt:~$ wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
OK

ceph@mgmt:~$ echo deb http://ceph.com/debian-giant/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
deb http://ceph.com/debian-giant/ trusty main

ceph@mgmt:~$ cat /etc/apt/sources.list.d/ceph.list 
deb http://ceph.com/debian-giant/ trusty main

ceph@mgmt:~$ sudo apt-get update

ceph@mgmt:~$ sudo apt-get install ceph-deploy
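
The deploy logs later in this post were produced by ceph-deploy 1.5.22; the installed version can be confirmed with:

ceph@mgmt:~$ ceph-deploy --version
1.5.22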


6. Setting up ceph
ceph@mgmt:~$ mkdir ~/cephcluster
ceph@mgmt:~$ cd ~/cephcluster
ceph@mgmt:~/cephcluster$ ceph-deploy new mon0 mon1 mon2
ceph@mgmt:~/cephcluster$ ls
ceph.conf  ceph.log  ceph.mon.keyring
ceph@mgmt:~/cephcluster$ cat ceph.conf 
[global]
fsid = b21cc0e9-833c-4de0-8784-38bee7bee576
mon_initial_members = mon0, mon1, mon2
mon_host = 115.XXX.XXX.60,115.XXX.XXX.61,115.XXX.XXX.62
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
public network = 115.xxx.0.xxx/24
osd pool default size = 2

(The last two lines are not part of the file ceph-deploy new generates; public network and osd pool default size were appended to ceph.conf by hand before the install step.)

Install ceph on all nodes
ceph@mgmt:~/cephcluster$ ceph-deploy install mon0 mon1 mon2 osd0 osd1 osd2 mds0 mgmt
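
Once the install pass finishes, each node should report the Giant release noted in the setup section:

ceph@mgmt:~$ ssh mon0 ceph --version   # expect: ceph version 0.87.1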

Deploy the initial monitors and gather the keyrings
ceph@mgmt:~/cephcluster$ ceph-deploy mon create-initial
ceph@mgmt:~/cephcluster$ ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph.conf  ceph.log  ceph.mon.keyring

Zap the OSD data disks (they are formatted as XFS during the prepare step below)
ceph@mgmt:~/cephcluster$ ceph-deploy disk zap osd0:sda osd1:sde osd2:sda
ceph@mgmt:~/cephcluster$ ceph-deploy disk list osd0 osd1 osd2
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.22): /usr/bin/ceph-deploy disk list osd0 osd1 osd2
[osd0][DEBUG ] connection detected need for sudo
[osd0][DEBUG ] connected to host: osd0
[osd0][DEBUG ] detect platform information from remote host
[osd0][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Listing disks on osd0...
[osd0][DEBUG ] find the location of an executable
[osd0][INFO  ] Running command: sudo /usr/sbin/ceph-disk list
[osd0][DEBUG ] /dev/sda other, unknown
[osd0][DEBUG ] /dev/sdb :
[osd0][DEBUG ]  /dev/sdb1 other, ext4, mounted on /
[osd0][DEBUG ] /dev/sdc other, unknown
[osd0][DEBUG ] /dev/sdd other, unknown
[osd0][DEBUG ] /dev/sde other, unknown
[osd0][DEBUG ] /dev/sdf other, unknown
[osd1][DEBUG ] connection detected need for sudo
[osd1][DEBUG ] connected to host: osd1 
[osd1][DEBUG ] detect platform information from remote host
[osd1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Listing disks on osd1...
[osd1][DEBUG ] find the location of an executable
[osd1][INFO  ] Running command: sudo /usr/sbin/ceph-disk list
[osd1][DEBUG ] /dev/sda other, ext4
[osd1][DEBUG ] /dev/sdb other, ddf_raid_member
[osd1][DEBUG ] /dev/sdc other, ddf_raid_member
[osd1][DEBUG ] /dev/sdd other, ddf_raid_member
[osd1][DEBUG ] /dev/sde other, unknown
[osd1][DEBUG ] /dev/sdf :
[osd1][DEBUG ]  /dev/sdf1 other, ext4, mounted on /
[osd2][DEBUG ] connection detected need for sudo
[osd2][DEBUG ] connected to host: osd2 
[osd2][DEBUG ] detect platform information from remote host
[osd2][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Listing disks on osd2...
[osd2][DEBUG ] find the location of an executable
[osd2][INFO  ] Running command: sudo /usr/sbin/ceph-disk list
[osd2][WARNIN] WARNING:ceph-disk:Old blkid does not support ID_PART_ENTRY_* fields, trying sgdisk; may not correctly identify ceph volumes with dmcrypt
[osd2][DEBUG ] /dev/sda other, unknown
[osd2][DEBUG ] /dev/sdb other, ext4
[osd2][DEBUG ] /dev/sdc :
[osd2][DEBUG ]  /dev/sdc1 other, zfs_member
[osd2][DEBUG ]  /dev/sdc2 other
[osd2][DEBUG ] /dev/sdd :
[osd2][DEBUG ]  /dev/sdd1 other, zfs_member
[osd2][DEBUG ]  /dev/sdd9 other, 6a945a3b-1dd2-11b2-99a6-080020736631
[osd2][DEBUG ] /dev/sde :
[osd2][DEBUG ]  /dev/sde1 other, zfs_member
[osd2][DEBUG ]  /dev/sde9 other, 6a945a3b-1dd2-11b2-99a6-080020736631
[osd2][DEBUG ] /dev/sdf :
[osd2][DEBUG ]  /dev/sdf1 other, ext4, mounted on /

ceph@mgmt:~/cephcluster$ ceph-deploy osd prepare osd0:sda osd1:sde osd2:sda
 ※ Alternatively: ceph@mgmt:~/cephcluster$ ceph-deploy osd create osd0:sda osd1:sde osd2:sda
    (create = prepare + activate in one step)
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.22): /usr/bin/ceph-deploy --overwrite-conf osd prepare osd0:/dev/sda osd1:/dev/sde osd2:/dev/sda
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks osd0:/dev/sda: osd1:/dev/sde: osd2:/dev/sda:
[osd0][DEBUG ] connection detected need for sudo
[osd0][DEBUG ] connected to host: osd0 
[osd0][DEBUG ] detect platform information from remote host
[osd0][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Deploying osd to osd0
[osd0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[osd0][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host osd0 disk /dev/sda journal None activate False
[osd0][INFO  ] Running command: sudo ceph-disk -v prepare --fs-type xfs --cluster ceph -- /dev/sda
[osd0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[osd0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[osd0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[osd0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[osd0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[osd0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[osd0][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sda
[osd0][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on /dev/sda
[osd0][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --new=2:0:5120M --change-name=2:ceph journal --partition-guid=2:522706fe-5a69-4727-843e-4de899a04601 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sda
[osd0][DEBUG ] 
[osd0][DEBUG ] ***************************************************************
[osd0][DEBUG ] Found invalid GPT and valid MBR; converting MBR to GPT format
[osd0][DEBUG ] in memory. 
[osd0][DEBUG ] ***************************************************************
[osd0][DEBUG ] 
[osd0][DEBUG ] The operation has completed successfully.
[osd0][WARNIN] DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sda
[osd0][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sda
[osd0][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
[osd0][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/522706fe-5a69-4727-843e-4de899a04601
[osd0][WARNIN] DEBUG:ceph-disk:Creating osd partition on /dev/sda
[osd0][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:998d2905-1035-4de7-9bdc-d140c25f15cd --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sda
[osd0][DEBUG ] The operation has completed successfully.
[osd0][WARNIN] DEBUG:ceph-disk:Calling partprobe on created device /dev/sda
[osd0][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sda
[osd0][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
[osd0][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/sda1
[osd0][WARNIN] INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sda1
[osd0][DEBUG ] meta-data=/dev/sda1              isize=2048   agcount=17, agsize=268435455 blks
[osd0][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=0
[osd0][DEBUG ] data     =                       bsize=4096   blocks=4393218811, imaxpct=5
[osd0][DEBUG ]          =                       sunit=0      swidth=0 blks
[osd0][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0
[osd0][DEBUG ] log      =internal log           bsize=4096   blocks=521728, version=2
[osd0][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
[osd0][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[osd0][WARNIN] DEBUG:ceph-disk:Mounting /dev/sda1 on /var/lib/ceph/tmp/mnt.8Ofa8m with options noatime,inode64
[osd0][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sda1 /var/lib/ceph/tmp/mnt.8Ofa8m
[osd0][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.8Ofa8m
[osd0][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.8Ofa8m/journal -> /dev/disk/by-partuuid/522706fe-5a69-4727-843e-4de899a04601
[osd0][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.8Ofa8m
[osd0][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.8Ofa8m
[osd0][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sda
[osd0][DEBUG ] The operation has completed successfully.
[osd0][WARNIN] DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sda
[osd0][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sda
[osd0][INFO  ] checking OSD status...
[osd0][INFO  ] Running command: sudo ceph --cluster=ceph osd stat --format=json
[osd0][WARNIN] there are 6 OSDs down
[osd0][WARNIN] there are 6 OSDs out
[ceph_deploy.osd][DEBUG ] Host osd0 is now ready for osd use.
[osd1][DEBUG ] connection detected need for sudo
[osd1][DEBUG ] connected to host: osd1 
[osd1][DEBUG ] detect platform information from remote host
[osd1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Deploying osd to osd1
[osd1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[osd1][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host osd1 disk /dev/sde journal None activate False
[osd1][INFO  ] Running command: sudo ceph-disk -v prepare --fs-type xfs --cluster ceph -- /dev/sde
[osd1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[osd1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[osd1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[osd1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[osd1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[osd1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[osd1][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sde
[osd1][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on /dev/sde
[osd1][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --new=2:0:5120M --change-name=2:ceph journal --partition-guid=2:568d3411-c56d-48c4-9ba8-f1c1532feb0c --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sde
[osd1][DEBUG ] Creating new GPT entries.
[osd1][DEBUG ] The operation has completed successfully.
[osd1][WARNIN] DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sde
[osd1][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sde
[osd1][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
[osd1][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/568d3411-c56d-48c4-9ba8-f1c1532feb0c
[osd1][WARNIN] DEBUG:ceph-disk:Creating osd partition on /dev/sde
[osd1][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:0335f4fd-816a-4160-be90-f15751aa2dca --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sde
[osd1][DEBUG ] The operation has completed successfully.
[osd1][WARNIN] DEBUG:ceph-disk:Calling partprobe on created device /dev/sde
[osd1][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sde
[osd1][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
[osd1][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/sde1
[osd1][WARNIN] INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sde1
[osd1][DEBUG ] meta-data=/dev/sde1              isize=2048   agcount=17, agsize=268435455 blks
[osd1][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=0
[osd1][DEBUG ] data     =                       bsize=4096   blocks=4393218811, imaxpct=5
[osd1][DEBUG ]          =                       sunit=0      swidth=0 blks
[osd1][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0
[osd1][DEBUG ] log      =internal log           bsize=4096   blocks=521728, version=2
[osd1][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
[osd1][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[osd1][WARNIN] DEBUG:ceph-disk:Mounting /dev/sde1 on /var/lib/ceph/tmp/mnt.ownrwF with options noatime,inode64
[osd1][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sde1 /var/lib/ceph/tmp/mnt.ownrwF
[osd1][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.ownrwF
[osd1][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.ownrwF/journal -> /dev/disk/by-partuuid/568d3411-c56d-48c4-9ba8-f1c1532feb0c
[osd1][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.ownrwF
[osd1][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.ownrwF
[osd1][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sde
[osd1][DEBUG ] The operation has completed successfully.
[osd1][WARNIN] DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sde
[osd1][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sde
[osd1][INFO  ] checking OSD status...
[osd1][INFO  ] Running command: sudo ceph --cluster=ceph osd stat --format=json
[osd1][WARNIN] there are 7 OSDs down
[osd1][WARNIN] there are 7 OSDs out
[ceph_deploy.osd][DEBUG ] Host osd1 is now ready for osd use.
[osd2][DEBUG ] connection detected need for sudo
[osd2][DEBUG ] connected to host: osd2 
[osd2][DEBUG ] detect platform information from remote host
[osd2][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Deploying osd to osd2
[osd2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[osd2][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host osd2 disk /dev/sda journal None activate False
[osd2][INFO  ] Running command: sudo ceph-disk -v prepare --fs-type xfs --cluster ceph -- /dev/sda
[osd2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[osd2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[osd2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[osd2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[osd2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[osd2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[osd2][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sda
[osd2][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on /dev/sda
[osd2][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --new=2:0:5120M --change-name=2:ceph journal --partition-guid=2:281703d4-6186-4eb0-9c59-cd51b9df8d08 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sda
[osd2][DEBUG ] Creating new GPT entries.
[osd2][DEBUG ] The operation has completed successfully.
[osd2][WARNIN] DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sda
[osd2][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sda
[osd2][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
[osd2][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/281703d4-6186-4eb0-9c59-cd51b9df8d08
[osd2][WARNIN] DEBUG:ceph-disk:Creating osd partition on /dev/sda
[osd2][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:2935fe8a-842b-4ab3-bdc4-bd0c420a8007 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sda
[osd2][DEBUG ] The operation has completed successfully.
[osd2][WARNIN] DEBUG:ceph-disk:Calling partprobe on created device /dev/sda
[osd2][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sda
[osd2][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
[osd2][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/sda1
[osd2][WARNIN] INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sda1
[osd2][DEBUG ] meta-data=/dev/sda1              isize=2048   agcount=17, agsize=268435455 blks
[osd2][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=0
[osd2][DEBUG ] data     =                       bsize=4096   blocks=4393218811, imaxpct=5
[osd2][DEBUG ]          =                       sunit=0      swidth=0 blks
[osd2][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0
[osd2][DEBUG ] log      =internal log           bsize=4096   blocks=521728, version=2
[osd2][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
[osd2][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[osd2][WARNIN] DEBUG:ceph-disk:Mounting /dev/sda1 on /var/lib/ceph/tmp/mnt.03ACQc with options noatime,inode64
[osd2][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sda1 /var/lib/ceph/tmp/mnt.03ACQc
[osd2][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.03ACQc
[osd2][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.03ACQc/journal -> /dev/disk/by-partuuid/281703d4-6186-4eb0-9c59-cd51b9df8d08
[osd2][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.03ACQc
[osd2][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.03ACQc
[osd2][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sda
[osd2][DEBUG ] The operation has completed successfully.
[osd2][WARNIN] DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sda
[osd2][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sda
[osd2][INFO  ] checking OSD status...
[osd2][INFO  ] Running command: sudo ceph --cluster=ceph osd stat --format=json
[osd2][WARNIN] there are 8 OSDs down
[osd2][WARNIN] there are 8 OSDs out
[ceph_deploy.osd][DEBUG ] Host osd2 is now ready for osd use.
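
Note that the journal is colocated on the data disk here (the 5120 MB partition 2 created in the log above). ceph-deploy also accepts a HOST:DISK:JOURNAL triplet, so a faster device can hold the journal; a hypothetical example with an SSD at sdf:

ceph@mgmt:~/cephcluster$ ceph-deploy osd prepare osd0:sda:sdf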

ceph@mgmt:~/cephcluster$ ceph-deploy osd activate osd0:sda1 osd1:sde1 osd2:sda1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.22): /usr/bin/ceph-deploy osd activate osd0:sda1 osd1:sde1 osd2:sda1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks osd0:/dev/sda1: osd1:/dev/sde1: osd2:/dev/sda1:
[osd0][DEBUG ] connection detected need for sudo
[osd0][DEBUG ] connected to host: osd0 
[osd0][DEBUG ] detect platform information from remote host
[osd0][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] activating host osd0 disk /dev/sda1
[ceph_deploy.osd][DEBUG ] will use init type: upstart
[osd0][INFO  ] Running command: sudo ceph-disk -v activate --mark-init upstart --mount /dev/sda1
[osd0][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sda1
[osd0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[osd0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[osd0][WARNIN] DEBUG:ceph-disk:Mounting /dev/sda1 on /var/lib/ceph/tmp/mnt.E14bfL with options noatime,inode64
[osd0][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sda1 /var/lib/ceph/tmp/mnt.E14bfL
[osd0][WARNIN] DEBUG:ceph-disk:Cluster uuid is 38e6bdde-e19e-4801-8dc0-0e7a47734611
[osd0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[osd0][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[osd0][WARNIN] DEBUG:ceph-disk:OSD uuid is 2b099483-4b12-484d-9e28-18da0b84b1db
[osd0][WARNIN] DEBUG:ceph-disk:OSD id is 0
[osd0][WARNIN] DEBUG:ceph-disk:Marking with init system upstart
[osd0][WARNIN] DEBUG:ceph-disk:ceph osd.0 data dir is ready at /var/lib/ceph/tmp/mnt.E14bfL
[osd0][WARNIN] INFO:ceph-disk:ceph osd.0 already mounted in position; unmounting ours.
[osd0][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.E14bfL
[osd0][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.E14bfL
[osd0][WARNIN] DEBUG:ceph-disk:Starting ceph osd.0...
[osd0][WARNIN] INFO:ceph-disk:Running command: /sbin/initctl emit --no-wait -- ceph-osd cluster=ceph id=0
[osd0][INFO  ] checking OSD status...
[osd0][INFO  ] Running command: sudo ceph --cluster=ceph osd stat --format=json
[osd1][DEBUG ] connection detected need for sudo
[osd1][DEBUG ] connected to host: osd1 
[osd1][DEBUG ] detect platform information from remote host
[osd1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] activating host osd1 disk /dev/sde1
[ceph_deploy.osd][DEBUG ] will use init type: upstart
[osd1][INFO  ] Running command: sudo ceph-disk -v activate --mark-init upstart --mount /dev/sde1
[osd1][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sde1
[osd1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[osd1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[osd1][WARNIN] DEBUG:ceph-disk:Mounting /dev/sde1 on /var/lib/ceph/tmp/mnt.i6zkTK with options noatime,inode64
[osd1][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sde1 /var/lib/ceph/tmp/mnt.i6zkTK
[osd1][WARNIN] DEBUG:ceph-disk:Cluster uuid is 38e6bdde-e19e-4801-8dc0-0e7a47734611
[osd1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[osd1][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[osd1][WARNIN] DEBUG:ceph-disk:OSD uuid is 691c4261-d8f9-4a19-bec3-469ce7e05dd6
[osd1][WARNIN] DEBUG:ceph-disk:OSD id is 1
[osd1][WARNIN] DEBUG:ceph-disk:Marking with init system upstart
[osd1][WARNIN] DEBUG:ceph-disk:ceph osd.1 data dir is ready at /var/lib/ceph/tmp/mnt.i6zkTK
[osd1][WARNIN] INFO:ceph-disk:ceph osd.1 already mounted in position; unmounting ours.
[osd1][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.i6zkTK
[osd1][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.i6zkTK
[osd1][WARNIN] DEBUG:ceph-disk:Starting ceph osd.1...
[osd1][WARNIN] INFO:ceph-disk:Running command: /sbin/initctl emit --no-wait -- ceph-osd cluster=ceph id=1
[osd1][INFO  ] checking OSD status...
[osd1][INFO  ] Running command: sudo ceph --cluster=ceph osd stat --format=json
[osd2][DEBUG ] connection detected need for sudo
[osd2][DEBUG ] connected to host: osd2 
[osd2][DEBUG ] detect platform information from remote host
[osd2][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] activating host osd2 disk /dev/sda1
[ceph_deploy.osd][DEBUG ] will use init type: upstart
[osd2][INFO  ] Running command: sudo ceph-disk -v activate --mark-init upstart --mount /dev/sda1
[osd2][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sda1
[osd2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[osd2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[osd2][WARNIN] DEBUG:ceph-disk:Mounting /dev/sda1 on /var/lib/ceph/tmp/mnt.Upvrm9 with options noatime,inode64
[osd2][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sda1 /var/lib/ceph/tmp/mnt.Upvrm9
[osd2][WARNIN] DEBUG:ceph-disk:Cluster uuid is 38e6bdde-e19e-4801-8dc0-0e7a47734611
[osd2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[osd2][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[osd2][WARNIN] DEBUG:ceph-disk:OSD uuid is 37e68566-c276-4091-96da-e7941bc48bc3
[osd2][WARNIN] DEBUG:ceph-disk:OSD id is 2
[osd2][WARNIN] DEBUG:ceph-disk:Marking with init system upstart
[osd2][WARNIN] DEBUG:ceph-disk:ceph osd.2 data dir is ready at /var/lib/ceph/tmp/mnt.Upvrm9
[osd2][WARNIN] INFO:ceph-disk:ceph osd.2 already mounted in position; unmounting ours.
[osd2][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.Upvrm9
[osd2][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.Upvrm9
[osd2][WARNIN] DEBUG:ceph-disk:Starting ceph osd.2...
[osd2][WARNIN] INFO:ceph-disk:Running command: /sbin/initctl emit --no-wait -- ceph-osd cluster=ceph id=2
[osd2][INFO  ] checking OSD status...
[osd2][INFO  ] Running command: sudo ceph --cluster=ceph osd stat --format=json


7. Copy ceph.conf and the admin keyring to every node
ceph@mgmt:~/cephcluster$ ceph-deploy admin mgmt mon0 mon1 mon2 osd0 osd1 osd2 mds0
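
The upstream quick start also makes the pushed keyring readable so that ceph commands work without sudo on each node:

ceph@mgmt:~$ ssh mon0 sudo chmod +r /etc/ceph/ceph.client.admin.keyring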


8. Create the MDS (metadata server)
ceph@mgmt:~/cephcluster$ ceph-deploy mds create mds0
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.22): /usr/bin/ceph-deploy mds create mds0
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts mds0:mds0
[mds0][DEBUG ] connection detected need for sudo
[mds0][DEBUG ] connected to host: mds0 
[mds0][DEBUG ] detect platform information from remote host
[mds0][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.mds][DEBUG ] remote host will use upstart
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to mds0
[mds0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[mds0][WARNIN] mds keyring does not exist yet, creating one
[mds0][DEBUG ] create a keyring file
[mds0][DEBUG ] create path if it doesn't exist
[mds0][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.mds0 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-mds0/keyring
[mds0][INFO  ] Running command: sudo initctl emit ceph-mds cluster=ceph id=mds0
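
ceph-deploy only starts the daemon; on Giant a filesystem still has to be created before the MDS goes active. A minimal sketch, if CephFS will actually be used (the pool names and the PG count of 64 are choices for this small cluster, not requirements):

ceph@mgmt:~$ ceph osd pool create cephfs_data 64
ceph@mgmt:~$ ceph osd pool create cephfs_metadata 64
ceph@mgmt:~$ ceph fs new cephfs cephfs_metadata cephfs_data
ceph@mgmt:~$ ceph mds stat   # should eventually report up:active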


9. Check cluster status
ceph@mgmt:~$ ceph status
    cluster 38e6bdde-e19e-4801-8dc0-0e7a47734611
     health HEALTH_OK
     monmap e1: 3 mons at {mon0=115.XXX.XXX.60:6789/0,mon1=115.XXX.XXX.61:6789/0,mon2=115.XXX.XXX.62:6789/0}, election epoch 8, quorum 0,1,2 mon0,mon1,mon2
     osdmap e14: 3 osds: 3 up, 3 in
      pgmap v25: 64 pgs, 1 pools, 0 bytes data, 0 objects
            106 MB used, 50270 GB / 50270 GB avail
                  64 active+clean
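
Two other views worth checking at this point:

ceph@mgmt:~$ ceph osd tree   # all three OSDs should show as up, with a weight assigned
ceph@mgmt:~$ ceph df         # per-pool usage; the totals mirror the 50270 GB above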


On an OSD node, the prepared data partition (/dev/sda1 here; the journal lives on /dev/sda2) is mounted automatically under /var/lib/ceph/osd/:
root@osd0:~# df
Filesystem       1K-blocks    Used   Available Use% Mounted on
/dev/sdb1         15250832 3408076    11045012  24% /
none                     4       0           4   0% /sys/fs/cgroup
udev               8061472      12     8061460   1% /dev
tmpfs              1614496     496     1614000   1% /run
none                  5120       0        5120   0% /run/lock
none               8072476       8     8072468   1% /run/shm
none                102400       0      102400   0% /run/user
/dev/sda1      17570788332   37948 17570750384   1% /var/lib/ceph/osd/ceph-0

