On 03/31/2016 06:43 PM, Peter Steele wrote:
> I've created an EC2 AMI for AWS that is essentially a CentOS 7
> "hypervisor" image. I deploy instances of it in AWS and create a
> number of libvirt-based LXC containers on each of those instances.
> The containers run fine within a single host and have no problem
> communicating with each other or with their host, and vice versa.
> However, containers hosted in one EC2 instance cannot communicate
> with containers hosted in another EC2 instance.
> We've tried various tweaks to our Amazon VPC but have been unable
> to find a way to solve this networking issue. If I use something
> like VMware or KVM and create VMs from this same hypervisor image,
> the containers running under those VMs can communicate with each
> other, even across different hosts.
What is the <interface> config of your nested containers? Do they each
get a public IP address?
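For example, a bridged setup in the container's domain XML would look
something like this (br0 and the MAC address here are placeholders,
not values from your environment):

  <interface type='bridge'>
    <source bridge='br0'/>
    <mac address='52:54:00:4b:73:5f'/>
  </interface>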
> My real question is: has anyone tried deploying EC2 images that
> host containers and figured out how to successfully communicate
> between containers on different hosts?
No experience with EC2, sorry.