PolKit rule and API match access_drivers = [ "polkit" ]
by Θεοφάνης Κοντογιάννης
Hi All,
I am trying to implement the following use case.
User sfrag is logged in to the host via SSH.
Running 'virsh list --all' should trigger PolKit authentication and present ALL domains suffixed with -SF
I have used and adapted the example from: libvirt.org Git - libvirt.git/blob - examples/polkit/libvirt-acl.rules
I adapted the setup so that it includes user sfrag.
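To illustrate the intended behaviour, the kind of rule I mean looks roughly like this (only an illustrative sketch, not the exact file I used; it assumes the rule is dropped into /etc/polkit-1/rules.d/):
polkit.addRule(function(action, subject) {
    // Only handle libvirt API access checks for user sfrag
    if (action.id.indexOf("org.libvirt.api.") != 0 || subject.user != "sfrag")
        return polkit.Result.NOT_HANDLED;
    // Allow the connection-level checks needed by 'virsh list --all'
    if (action.id == "org.libvirt.api.connect.getattr" ||
        action.id == "org.libvirt.api.connect.search-domains")
        return polkit.Result.YES;
    // For per-domain checks, only expose domains whose name ends in -SF
    var name = action.lookup("domain_name");
    if (name && /-SF$/.test(name))
        return polkit.Result.YES;
    return polkit.Result.NO;
});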
The user was always asked to authenticate as root and not as themselves, and ONLY when running "virsh -c qemu:///system list --all".
Had to change /etc/libvirt/libvirtd.conf to include:
auth_unix_ro = "polkit"
access_drivers = [ "polkit" ]
log_filters="1:access.accessdriverpolkit"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
All polkit rules for user sfrag were removed at this point.
Now, when user sfrag runs 'virsh list --all', nothing is written to /var/log/libvirt/libvirtd.log or /var/log/secure.
Running the same as user root gives interesting results in the logs:
org.libvirt.api.connect.getattr
org.libvirt.api.connect.search-domains
org.libvirt.api.domain.getattr (for every defined domain)
org.libvirt.api.domain.read (again, for every defined domain)
Virsh is using qemu:///session as the default URI.
Why does running virsh as non-root not trigger polkit or any API access checks (based on the log output), while running the same command as root gives all the interesting output?
This implies that running virsh as root results in different actions compared to calling it as non-root.
Thank you for Your time.
BR
Theophanis Kontogiannis
4 years, 8 months
Getting Intel RDT cache allocation status from libvirt?
by Neil Moore
Is there a way to get the current cache allocation status of a host
from libvirt, via API or virsh?
I've seen some references to a 'virsh nodecachestats' command in some
patches from 2017 that doesn't seem to exist, and I see that 'virsh
domstats --cpu-total <domain>' has similar information but seems to be
focused on cache monitoring information rather than the allocation
status.
Thanks,
Neil
4 years, 8 months
When virEventAddTimeout triggers a timeout, should the callback call virConnectDomainEventDeregisterAny?
by thomas.kuang
hi, all
I hot-unplug a network interface in a thread. Because virDomainDetachDeviceFlags may be asynchronous, I do the following:
cb_para->cluster_id = info->cluster_id;
cb_para->group_id = info->group_id;
cb_para->vsys_id = info->vsysid;
cb_para->vnf_id = info->vnf_id;
cb_para->conn = conn;
cb_para->time_out = 20*1000;//20s
cb_para->call_id = virConnectDomainEventRegisterAny(conn, dom, VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED, VIR_DOMAIN_EVENT_CALLBACK(vnf_control_del_network_cb), cb_para, vnf_control_del_network_cb_free);
flags |= VIR_DOMAIN_AFFECT_CONFIG;
if (virDomainIsActive(dom) == 1) {
flags |= VIR_DOMAIN_AFFECT_LIVE;
}
cb_para->timer_id = virEventAddTimeout(cb_para->time_out, vnf_control_del_network_timeout_cb, cb_para, vnf_control_del_network_cb_free);
ret = virDomainDetachDeviceFlags(dom, xml, flags);
//the code above runs in a thread function
void vnf_control_del_network_cb(virConnectPtr conn, virDomainPtr dom, const char *dev,void * opaque)
{
struct vnf_del_netwk_opaque * arg = (struct vnf_del_netwk_opaque *)opaque;
if(0 == virConnectDomainEventDeregisterAny(conn, arg->call_id)) {
VNF_DBG("succ to deRegister, conn:%p, call id:%d\n", conn, arg->call_id);
} else {
VNF_DBG("fail to deRegister, conn:%p, call id:%d\n", conn, arg->call_id);
}
....................
virEventRemoveTimeout(arg->timer_id);
}
void vnf_control_del_network_timeout_cb(int timer, void *opaque)
{
struct vnf_del_netwk_opaque * arg = (struct vnf_del_netwk_opaque *)opaque;
................
if(0 == virConnectDomainEventDeregisterAny(arg->conn, arg->call_id)) { //deadlock happens here; am I using these APIs incorrectly?
VNF_DBG("succ to deRegister, conn:%p, call id:%d\n", arg->conn, arg->call_id);
} else {
VNF_DBG("fail to deRegister, conn:%p, call id:%d\n", arg->conn, arg->call_id);
}
virEventRemoveTimeout(arg->timer_id);
}
(gdb) i threads
Id Target Id Frame
7 Thread 0x7f29fa5ff700 (LWP 104950) "vnfd" 0x00007f2a064f656d in nanosleep () from /lib64/libc.so.6
6 Thread 0x7f29f9bfe700 (LWP 104951) "vnfd" 0x00007f2a0761851d in __lll_lock_wait () from /lib64/libpthread.so.0
5 Thread 0x7f29f8dff700 (LWP 104952) "vnfd" 0x00007f2a064f656d in nanosleep () from /lib64/libc.so.6
4 Thread 0x7f29f7fff700 (LWP 104953) "vnfd" 0x00007f2a064f656d in nanosleep () from /lib64/libc.so.6
3 Thread 0x7f29f71ff700 (LWP 104954) "vnfd" 0x00007f2a064f656d in nanosleep () from /lib64/libc.so.6
2 Thread 0x7f29f63ff700 (LWP 104955) "vnfd" 0x00007f2a064f656d in nanosleep () from /lib64/libc.so.6
* 1 Thread 0x7f2a087f8900 (LWP 104946) "vnfd" 0x00007f2a06530183 in epoll_wait () from /lib64/libc.so.6
(gdb) thread 6
[Switching to thread 6 (Thread 0x7f29f9bfe700 (LWP 104951))]
#0 0x00007f2a0761851d in __lll_lock_wait () from /lib64/libpthread.so.0
(gdb) bt
#0 0x00007f2a0761851d in __lll_lock_wait () from /lib64/libpthread.so.0
#1 0x00007f2a07613e1b in _L_lock_812 () from /lib64/libpthread.so.0
#2 0x00007f2a07613ce8 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3 0x00007f2a072a1a0e in remoteConnectClose () from /lib64/libvirt.so.0
#4 0x00007f2a072b2740 in virConnectDispose () from /lib64/libvirt.so.0
#5 0x00007f2a0710bbab in virObjectUnref () from /lib64/libvirt.so.0
#6 0x00007f2a07195577 in virObjectEventCallbackFree () from /lib64/libvirt.so.0
#7 0x00007f2a07196532 in virObjectEventStateDeregisterID () from /lib64/libvirt.so.0
#8 0x00007f2a07287238 in remoteConnectDomainEventDeregisterAny () from /lib64/libvirt.so.0
#9 0x00007f2a072d36d7 in virConnectDomainEventDeregisterAny () from /lib64/libvirt.so.0
#10 0x0000000000405bce in vnf_control_del_network_timeout_cb (timer=<optimized out>, opaque=0x7f29e9e99de0) at vnf_mgt/vnf_control.c:1293
#11 0x00007f2a070d20e9 in virEventPollRunOnce () from /lib64/libvirt.so.0
#12 0x00007f2a070d0a42 in virEventRunDefaultImpl () from /lib64/libvirt.so.0
#13 0x000000000040d099 in vnf_worker_proc (arg=<optimized out>) at vnf_mgt/vnf_control.c:1727
#14 0x00007f2a07611e25 in start_thread () from /lib64/libpthread.so.0
#15 0x00007f2a0652fbad in clone () from /lib64/libc.so.6
(gdb) f 10
#10 0x0000000000405bce in vnf_control_del_network_timeout_cb (timer=<optimized out>, opaque=0x7f29e9e99de0) at vnf_mgt/vnf_control.c:1293
1293 if(0 == virConnectDomainEventDeregisterAny(arg->conn, arg->call_id)) {
(gdb) p arg->time_id
There is no member named time_id.
(gdb) set print pretty
(gdb) p *arg
$1 = {
cluster_id = 0,
vsys_id = 0,
group_id = 2,
vnf_id = 1,
call_id = 0,
timer_id = 16,
time_out = 20000,
conn = 0x7f29f8013000
}
(gdb) quit
A debugging session is active.
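One workaround I am considering (only a sketch; it assumes an extra 'expired' flag added to struct vnf_del_netwk_opaque and that the thread driving the events can still reach the pending cb_para, e.g. through a shared list) is to let the timeout callback merely record that it fired, and to do the deregistration, which may drop the last reference to the connection, outside of any libvirt callback:
void vnf_control_del_network_timeout_cb(int timer, void *opaque)
{
    struct vnf_del_netwk_opaque * arg = (struct vnf_del_netwk_opaque *)opaque;
    arg->expired = 1; /* only record that the timeout fired */
}
/* in the thread that drives the event loop, after each iteration: */
while (1) {
    if (virEventRunDefaultImpl() < 0)
        break;
    if (cb_para && cb_para->expired) {
        virConnectPtr conn = cb_para->conn;
        int call_id = cb_para->call_id;
        int timer_id = cb_para->timer_id;
        cb_para = NULL; /* the free callback registered for both may free it below */
        virConnectDomainEventDeregisterAny(conn, call_id); /* not inside a callback now */
        virEventRemoveTimeout(timer_id);
    }
}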
4 years, 8 months
Why doesn't virConnectDomainEventRegisterAny always trigger the callback, and how can I get a reliable callback?
by thomas.kuang
hi, all:
I create a VM with six NICs; after the VM starts, I delete three of the NICs.
The deletion logic for all three NICs runs in a thread, and every NIC deletion follows this process:
int vnf_control_del_network(void *arg)
{
。。。。。
call_id = virConnectDomainEventRegisterAny(conn, dom, VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED, VIR_DOMAIN_EVENT_CALLBACK(vnf_control_del_network_cb), cb_para, vnf_control_del_network_cb_free);
flags |= VIR_DOMAIN_AFFECT_CONFIG;
if (virDomainIsActive(dom) == 1) {
flags |= VIR_DOMAIN_AFFECT_LIVE;
}
ret = virDomainDetachDeviceFlags(dom, xml, flags); // detach a nic from vm guest os
。。。。
}
void vnf_control_del_network_cb(virConnectPtr conn, virDomainPtr dom, const char *dev,void * opaque) //this callback is not always triggered; why?
{
struct vnf_del_netwk_opaque * arg = (struct vnf_del_netwk_opaque *)opaque;
........ do something;
if(0 == virConnectDomainEventDeregisterAny(conn, arg->call_id))
printf("succ to deRegister, conn:%p, call id:%d\n", conn, arg->call_id);
else
printf("fail to deRegister, conn:%p, call id:%d\n", conn, arg->call_id);
}
void* vnf_worker_proc(void *arg)
{
vnf_mission_t *mission = NULL;
pthread_t tid = pthread_self();
vnf_task_ctx_t *task = vnf_task_get_task_info(tid);
assert(task);
pthread_detach(tid);
while (1) {
mission = vnf_mission_queue_get(task);
if (mission == NULL) {
sleep(1);
continue;
}
VNF_IMAGE_DBG("tid:%lu, get one mission from mission queue\n", tid);
vnf_op_process(&mission->info); //this ends up calling vnf_control_del_network
if (mission) {
vnf_mission_free(mission);
}
if(virEventRunDefaultImpl() < 0) {
VNF_IMAGE_DBG("virEventRunDefaultImpl() called failure\n");
}
}
return NULL;
}
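As far as I understand the libvirt event API, virEventRunDefaultImpl() needs to be driven continuously, not just once after each mission, otherwise registered callbacks are only dispatched whenever the worker happens to get around to it. A sketch of what I am considering instead (the dedicated libvirt_event_loop thread below is my own idea, not existing code):
#include <pthread.h>
#include <stdio.h>
#include <libvirt/libvirt.h>

static void *libvirt_event_loop(void *arg)
{
    (void)arg;
    for (;;) {
        /* dispatch pending libvirt events; blocks until there is work */
        if (virEventRunDefaultImpl() < 0) {
            fprintf(stderr, "virEventRunDefaultImpl() failed\n");
            break;
        }
    }
    return NULL;
}

/* called once at startup, before the libvirt connection is opened */
static int start_libvirt_event_thread(void)
{
    pthread_t tid;
    if (virEventRegisterDefaultImpl() < 0)
        return -1;
    return pthread_create(&tid, NULL, libvirt_event_loop, NULL);
}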
4 years, 8 months
libvirt remote uri format
by Joe Muro
Hi,
I am using the Python libvirt API to get domain information. When using a URI
without specifying the socket path, an error occurs.
uri = "qemu+ssh://myuser@some.kvm.host/system"
conn = libvirt.open(uri)
This results in the following:
libvirt: XML-RPC error : internal error: received hangup event on socket
If I append the socket path to the URI, it works. e.g.
qemu+ssh://myuser@some.kvm.host/system?socket=/var/run/libvirt/libvirt-sock
Is this the recommended way to construct an SSH URI? My concern is that the
socket path may be different when connecting to different libvirt hosts.
The remote host is Ubuntu 20.04 running libvirtd (libvirt) 6.0.0 under systemd.
- Joe
4 years, 8 months
libvirt Source RPMs for CentOS or RHEL?
by FuLong Wang
Hello Experts,
Do we have libvirt source RPMs (version above 5.9.0) for CentOS or RHEL?
I can only find source RPMs for Fedora at the public link below:
https://libvirt.org/sources/
--
FuLong Wang
fulong.wang(a)cn.ibm.com
IBM China Systems Lab, Beijing, China
4 years, 8 months
Can libvirt.so use jemalloc to manage memory?
by thomas.kuang
Hi, all,
My daemon makes a libvirt API call:
virEventRegisterDefaultImpl();
As soon as I call virEventRegisterDefaultImpl(), it always core dumps; the backtrace is:
(gdb) bt
#0 free (ptr=0x1) at include/jemalloc/internal/arena.h:652
#1 0x00007f57690a488a in virFree () from /lib64/libvirt.so.0
#2 0x00007f57690c3562 in virResetError () from /lib64/libvirt.so.0
#3 0x00007f57690c49ad in virEventRegisterDefaultImpl () from /lib64/libvirt.so.0
#4 0x00000000004029ad in main (argc=<optimized out>, argv=<optimized out>) at vnf_mgt/vnf_control.c:2920
(gdb) quit
The following code is copied from libvirt/src/util/:
int virEventRegisterDefaultImpl(void)
{
    VIR_DEBUG("registering default event implementation");

    virResetLastError();

    if (virEventPollInit() < 0) {
        virDispatchError(NULL);
        return -1;
    }

    virEventRegisterImpl(virEventPollAddHandle,
                         virEventPollUpdateHandle,
                         virEventPollRemoveHandle,
                         virEventPollAddTimeout,
                         virEventPollUpdateTimeout,
                         virEventPollRemoveTimeout);
    return 0;
}

void
virResetLastError(void)
{
    virErrorPtr err = virLastErrorObject();
    if (err)
        virResetError(err);
}

static virErrorPtr
virLastErrorObject(void)
{
    virErrorPtr err;
    err = virThreadLocalGet(&virLastErr);
    if (!err) {
        if (VIR_ALLOC_QUIET(err) < 0)
            return NULL;
        if (virThreadLocalSet(&virLastErr, err) < 0)
            VIR_FREE(err);
    }
    return err;
}

void
virResetError(virErrorPtr err)
{
    if (err == NULL)
        return;
    VIR_FREE(err->message);
    VIR_FREE(err->str1);
    VIR_FREE(err->str2);
    VIR_FREE(err->str3);
    memset(err, 0, sizeof(virError));
}

# define VIR_FREE(ptr) virFree(1 ? (void *) &(ptr) : (ptr))

void virFree(void *ptrptr)
{
    int save_errno = errno;

    free(*(void**)ptrptr);
    *(void**)ptrptr = NULL;
    errno = save_errno;
}
When my daemon is linked with jemalloc it always core dumps, but if I use glibc to manage memory it works fine. Why?
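One way to narrow this down might be a minimal program linked exactly like the real daemon (same jemalloc flags) that only makes this one libvirt call: if it also crashes, the problem is in how jemalloc ends up interposed for libvirt.so; if it does not, the corruption comes from somewhere else in my daemon. A sketch:
#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    /* the only libvirt call: the one whose virResetLastError()/virFree() path crashes */
    if (virEventRegisterDefaultImpl() < 0) {
        fprintf(stderr, "virEventRegisterDefaultImpl() failed\n");
        return 1;
    }
    printf("event implementation registered without crashing\n");
    return 0;
}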
4 years, 8 months