I see lkvm output for the following docker command:

lkvm run -c 6 -m 1024 --name virtio \
  --kernel /usr/lib/kernel/vmlinux.container \
  --params "systemd.unit=container.target rw tsc=reliable systemd.show_status=false no_timer_check rcupdate.rcu_expedited=1 console=hvc0 quiet"
Can you please point me to the command-line option that makes /data visible
inside the VM?
Thanks for your help here.
EMC Office of CTO
On 11/3/15, 3:50 AM, "Dimitri John Ledkov" <dimitri.j.ledkov(a)intel.com> wrote:
On 2 November 2015 at 23:50, Khanduja, Vaibhav <vaibhav.khanduja(a)emc.com> wrote:
> Thanks, I will try this out …
> Meanwhile, I had another quick question. I was able to successfully run
> a container with external volumes:
> docker run -d -v /data:/data ubuntu sleep 5000
> I see the container gets /data bind-mounted inside as a volume. Can you
> please elaborate a bit on how volumes are made visible inside the virtual
> machine? Are you using the virtFS 9p virtio device, together with 9mount /
> 9bind, for this purpose?
The current implementation takes the host folder, exports it as a virtio
9p filesystem into the guest VM, which is then mounted into the workload
chroot inside the VM.
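For reference, that plumbing can be sketched with plain kvmtool and a 9p
mount; the tag name "hostdata" and the paths here are illustrative, not
necessarily what the exec driver passes:

```shell
# Host side: export /data into the guest as a virtio-9p share
# (kvmtool's --9p option takes "<directory>,<tag>").
lkvm run -c 6 -m 1024 \
  --kernel /usr/lib/kernel/vmlinux.container \
  --9p /data,hostdata

# Guest side: mount the share using the tag chosen on the host.
mount -t 9p -o trans=virtio,version=9p2000.L hostdata /data
```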
Directly exporting block devices to the VM, or using other networked
filesystems, are other possibilities. For example, though, the docker graph
driver currently does not provide a sufficient API to allow access to
LVM-managed volumes for export to the exec driver. Such an ability should
be requested and explored upstream.
> Technologist, EMC office of CTO
> On 11/2/15, 3:42 AM, "Dimitri John Ledkov"
>>On 31 October 2015 at 00:44, Khanduja, Vaibhav
>>> As part of installation on ubuntu, I see a kernel image under
>>> For me to have a different kernel image, do I have to use the
>>> under /usr/lib?
>>> And are there any suggestions as to how kernel modules can be loaded for
>>> the kernel running in the container?
>>The linux-container flavour shipped for Clear Containers for Docker Engine
>>has no loadable modules; this saves about 1.2MB on the kernel image
>>size and speeds up container boot time. And the fact that the kernel
>>image on the host does not have to be in sync with the guest image
>>(with loadable modules) helps a lot to decouple the two.
>>If there is a use-case to have something available in the VM, please
>>request it, and if reasonable we can enable it by default.
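As an illustration of what a module-less guest kernel looks like (this is a
made-up .config fragment, not the actual Clear Linux configuration), drivers
the VM needs go in as built-ins while module support itself is switched off:

```
# Illustrative .config fragment only:
# CONFIG_MODULES is not set        <- no loadable-module support at all
CONFIG_VIRTIO=y                    # virtio core built in
CONFIG_VIRTIO_PCI=y
CONFIG_NET_9P=y                    # 9p over virtio for shared folders
CONFIG_NET_9P_VIRTIO=y
CONFIG_9P_FS=y
```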
>>Because the kernel is decoupled from the container VM, on the host you
>>can use a different, or even your own, kernel.
>>I would recommend starting from our sources of the linux-container
>>package. You can get the source rpm from
>>https://download.clearlinux.org/releases/, or you can get a dsc / srpm
>>from the openSUSE Build Service publications,
>>as the docker clr exec driver relies on some of the patches present there
>>(specifically the lkvm driver, for now).
>>Once you have built your custom kernel image, with the modules you would
>>like to see enabled compiled in as built-ins, you can then do the
>>following: divert vmlinux.container away with dpkg-divert, and
>>place/package a symlink to your own kernel image in its place.
>>From that point onward, all containers will use your own kernel.
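A sketch of that diversion; the custom image path /boot/vmlinux-custom is
made up for the example:

```shell
# Move the packaged guest kernel aside, keeping dpkg aware of the diversion
# so package upgrades do not overwrite the custom image.
dpkg-divert --add --rename \
  --divert /usr/lib/kernel/vmlinux.container.distrib \
  /usr/lib/kernel/vmlinux.container

# Point the expected path at the custom kernel instead.
ln -s /boot/vmlinux-custom /usr/lib/kernel/vmlinux.container
```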
>>Or, simply tell us what you would like to have enabled in the
>>container kernel image? =)
53 sleeps till Christmas, or less
Open Source Technology Center
Intel Corporation (UK) Ltd. - Co. Reg. #1134945 - Pipers Way, Swindon SN3