Overview
Add the following to the [client] section of ceph.conf:
admin_socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log_file = /var/log/qemu/qemu-guest-$pid.log
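For reference, Ceph expands these metavariables per process: $cluster is the cluster name (ceph), $type the entity type (client), $id the client id (openstack here), $pid the qemu process id, and $cctid the CephContext id. A socket created from this template therefore ends up looking like:

/var/run/ceph/guests/ceph-client.openstack.111572.140236769443840.asok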
Create the log directory and the unix socket directory:
mkdir -p /var/run/ceph/guests/ /var/log/qemu/
Change the ownership of both directories:
chown qemu:qemu /var/log/qemu/ /var/run/ceph/guests
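Note that /var/run is typically a tmpfs, so /var/run/ceph/guests and its ownership disappear on reboot. A minimal sketch of recreating it automatically at boot on a systemd host, assuming the qemu user and group used above (the file name is hypothetical):

# /etc/tmpfiles.d/ceph-guests.conf
# type path                  mode user group age
d      /var/run/ceph/guests  0770 qemu qemu  -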
Restart the VM with virsh:
[root@nova10 ceph]# virsh shutdown instance-000005ea
Domain instance-000005ea is being shutdown
[root@nova10 ceph]# virsh start instance-000005ea
Domain instance-000005ea started
- Problem: a qemu-guest-111572.log file is generated under /var/log/qemu/, with the following error:
2018-01-22 11:18:55.737790 7f8b67509d00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.openstack.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
- cephx auth is already disabled, yet a keyring is still required here. I did not dig into why; I simply added a ceph.client.openstack.keyring file:
First, look up the auth keyring of the openstack user with ceph auth list:
client.openstack
        key: AQD4j4FZnChLGRAA1ElxLLZ45HfAQhC0QhKPVw==
        caps: [mon] allow r
        caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=openstack-pool-9ecb83e9-fe6c-4519-aeed-6d7646b05aae
- On the compute node, create the ceph.client.openstack.keyring file under /etc/ceph/ and copy in the openstack auth key obtained above.
Note, however, that the format inside the keyring file differs from what auth list displays; make the following changes:
[client.openstack]                                       ----- add the square brackets
        key = AQD4j4FZnChLGRAA1ElxLLZ45HfAQhC0QhKPVw==   ----- change the ':' after key to '='
        caps mon = "allow r"                             ----- drop the ':' after caps, remove the brackets around mon, add '=' after mon, and put the permission 'allow r' in double quotes; the other caps lines work the same way
        caps osd = "allow class-read object_prefix rbd_children, allow rwx pool=openstack-pool-9ecb83e9-fe6c-4519-aeed-6d7646b05aae"
- The qemu-guest-111572.log file under /var/log/qemu/ also contains the following error:
2018-01-22 11:18:55.737481 7f8b67509d00 -1 asok(0x7f8b6aba7ac0) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/guests/ceph-client.openstack.111572.140236769443840.asok': (13) Permission denied
- This is caused by the permissions on the /var/run/ceph directory: the VM instances that qemu starts run with owner and group qemu, but /var/run/ceph is owned by ceph:ceph with mode 770.
My fix here is simply to change /var/run/ceph to mode 777. (It is also best to set the permissions on /var/log/qemu/ to 777, since I am not sure which owner and group the qemu process runs as.)
[root@nova10 run]# ll | grep ceph
drwxrwx---. 3 ceph ceph 60 Jan 22 11:18 ceph
[root@nova10 run]# chmod 777 ceph -R
[root@nova10 run]# ll | grep ceph
drwxrwxrwx. 3 ceph ceph 60 Jan 22 11:18 ceph
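After restarting the VM once more, the admin socket should now be created under /var/run/ceph/guests/ without errors; it can be checked with something like the following (the exact name depends on the pid and cctid):

[root@nova10 run]# ls /var/run/ceph/guests/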
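If leaving /var/run/ceph world-writable is a concern, a narrower alternative is to grant only the qemu user access with an ACL; a sketch, assuming the qemu processes really do run as user qemu:

setfacl -m u:qemu:rwx /var/run/ceph /var/run/ceph/guests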
- Inspecting RBD through the admin_socket:
Note: each qemu process may use several rbd images; it uses at least one, since the guest system disk is now also an rbd image. The string after the process id in the socket name is the cookie.
[root@storage04 ~]# rbd status 03a1f3cc-6296-4953-b5bb-38932f2e1cf6_disk -p vms
Watchers:
        watcher=192.168.34.106:0/2681227209 client.7966888 cookie=140061172315264
[root@nova10 guests]# ceph daemon /var/run/ceph/guests/ceph-client.openstack.119985.139891203856896.asok help
{
    "config diff": "dump diff of current config and default config",
    "config get": "config get <field>: get the config value",
    "config set": "config set <field> <val> [<val> ...]: set a config variable",
    "config show": "dump current config settings",
    "get_command_descriptions": "list available commands",
    "git_version": "get git sha1",
    "help": "list available commands",
    "log dump": "dump recent log entries to log file",
    "log flush": "flush log entries to log file",
    "log reopen": "reopen log file",
    "objecter_requests": "show in-progress osd requests",
    "perf dump": "dump perfcounters value",
    "perf reset": "perf reset <name>: perf reset all or one perfcounter name",
    "perf schema": "dump perfcounters schema",
    "rbd cache flush volumes/volume-3e76f65d-4e9b-48f4-ac94-b27b5fa4b21e": "flush rbd image volumes/volume-3e76f65d-4e9b-48f4-ac94-b27b5fa4b21e cache",
    "rbd cache invalidate volumes/volume-3e76f65d-4e9b-48f4-ac94-b27b5fa4b21e": "invalidate rbd image volumes/volume-3e76f65d-4e9b-48f4-ac94-b27b5fa4b21e cache",
    "version": "get ceph version"
}
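On a busy compute node there is one such socket per qemu process (per CephContext). To find the socket that belongs to a given image, you can, per the note above that the trailing number in the socket name is the cookie shown by rbd status, grep for it; a sketch with a placeholder cookie:

[root@nova10 guests]# ls /var/run/ceph/guests/ | grep <cookie-from-rbd-status>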
- How do you adjust the rbd log level?
Set the debug_rbd log level through the admin_socket created by each process that uses an rbd image:
View the current value of debug_rbd:
[root@nova10 ceph]# ceph daemon /var/run/ceph/guests/ceph-client.openstack.119985.139891203856896.asok config get debug_rbd
{
    "debug_rbd": "0/0"
}
Raise the debug_rbd log level:
[root@nova10 qemu]# ceph daemon /var/run/ceph/guests/ceph-client.openstack.119985.139891203856896.asok config set debug_rbd 20/20
{
    "success": ""
}
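When the debugging session is over, remember to turn the level back down the same way, otherwise the guest log grows very quickly at 20/20:

[root@nova10 qemu]# ceph daemon /var/run/ceph/guests/ceph-client.openstack.119985.139891203856896.asok config set debug_rbd 0/0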
Watch the log:
[root@nova10 qemu]# tailf qemu-guest-119985.log
2018-01-22 16:09:31.532995 7f3af3c93d00 20 librbd::AioImageRequestWQ: aio_write: ictx=0x7f3af584f200, completion=0x7f3af56a3740, off=2147647488, len=1048576, flags=0
2018-01-22 16:09:31.533024 7f3af3c93d00 20 librbd::AioImageRequestWQ: queue: ictx=0x7f3af584f200, req=0x7f3afeb84100
2018-01-22 16:09:31.533028 7f3af3c93d00 20 librbd::ExclusiveLock: 0x7f3af5976140 is_lock_owner=1
2018-01-22 16:09:31.533059 7f3a3fe60700 20 librbd::AsyncOperation: 0x7f3af56a3878 start_op
2018-01-22 16:09:31.533073 7f3a3fe60700 20 librbd::AioImageRequestWQ: process: ictx=0x7f3af584f200, req=0x7f3afeb84100
2018-01-22 16:09:31.533077 7f3a3fe60700 20 librbd::AioImageRequest: aio_write: ictx=0x7f3af584f200, completion=0x7f3af56a3740
2018-01-22 16:09:31.533092 7f3a3fe60700 20 librbd::AioCompletion: 0x7f3af56a3740 set_request_count: pending=1
2018-01-22 16:09:31.534269 7f3a3f25b700 20 librbd::AioCompletion: 0x7f3af56a3740 complete_request: cb=1, pending=0
2018-01-22 16:09:31.534284 7f3a3f25b700 20 librbd::AioCompletion: 0x7f3af56a3740 finalize: r=0, read_buf=0, real_bl=0
2018-01-22 16:09:31.534301 7f3a3f25b700 20 librbd::AsyncOperation: 0x7f3af56a3878 finish_op
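The other commands in the help listing are invoked the same way; for example, based on the listing above, the rbd cache of this guest's volume could be flushed through the same socket:

[root@nova10 qemu]# ceph daemon /var/run/ceph/guests/ceph-client.openstack.119985.139891203856896.asok rbd cache flush volumes/volume-3e76f65d-4e9b-48f4-ac94-b27b5fa4b21e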