I think we're getting a little sidetracked on how the OpenStack
scheduler works (although I'll still respond to issues with my
current use case). I still come back to the issue that I feel the
API is not returning the proper information (ignoring how that info
is used by the caller). Note that the API already returns a list
of CPU features, it just hides many of those features behind a code
name. We really should return the complete list of features for those
interested in specific features, while still providing the code name
for those interested in that shorthand.
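To make this concrete, here's a rough sketch of the kind of structure
I have in mind (field names are illustrative, not the actual API
schema): return both the code-name shorthand and the complete feature
list, so callers can use whichever level of detail they need.

```python
# Hypothetical capability info: both the CPU model shorthand and the
# full feature list.  Field names are illustrative only.
host_capabilities = {
    "cpu_info": {
        "model": "Westmere",            # the code-name shorthand
        "features": [                   # the complete feature list
            "aes", "sse4.2", "sse4.1", "ssse3", "vmx",
        ],
    },
}

def has_feature(caps, feature):
    """Check for a specific feature without decoding the model name."""
    return feature in caps["cpu_info"]["features"]
```

A caller interested only in the shorthand reads "model"; a caller that
cares about, say, AES support just calls has_feature().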
Note that I can think of another use case where the complete feature
list makes sense. Imagine a dashboard that lists all platforms in
a datacenter/cloud. You might want to have the dashboard filter
its display to all platforms with Westmere CPUs, but you could just as
easily want to filter the display to all platforms with AES/NI
capable CPUs.
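Sketching that dashboard use case (made-up host data, purely for
illustration): with the complete feature list available, filtering by
model name and filtering by individual feature are both trivial.

```python
# Illustrative host inventory only; not real API output.
hosts = [
    {"name": "node1", "model": "Westmere",    "features": {"aes", "sse4.2"}},
    {"name": "node2", "model": "SandyBridge", "features": {"aes", "avx"}},
    {"name": "node3", "model": "Penryn",      "features": {"sse4.1"}},
]

def filter_by_model(hosts, model):
    """Filter on the code-name shorthand."""
    return [h["name"] for h in hosts if h["model"] == model]

def filter_by_feature(hosts, feature):
    """Filter on a specific CPU feature, e.g. AES/NI."""
    return [h["name"] for h in hosts if feature in h["features"]]
```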
On Mon, Jul 15, 2013 at 11:05:07AM +0100, Daniel P. Berrange wrote:
On Sun, Jul 14, 2013 at 11:43:57PM -0600, Don Dugger wrote:
> On Mon, Jul 01, 2013 at 09:43:58AM -0600, Don Dugger wrote:
> >
> > 1. Ultimately, I want to remove the periodic capability update completely.
> > The better technique is to update compute node state when the state
> > changes, periodic updates are just extra overhead.
It is easy enough for a node to refresh its list of flavours upon change
too, so that's not really a blocker.
Since no notification is given to anyone when a flavor is created or
deleted, this is a problem. Because the compute nodes don't know when
flavors change, they don't know when to do any sort of refresh.
> > 2. There's no concept of which nodes support which flavors so this is
> > a completely new infrastructure that would have to be added to the
> > scheduler.
Again that's not a problem. We're not aiming for the least work solution
here, we want the one that makes most sense from a design pov even if that
requires extra dev work.
I'm trying to apply Occam's razor here. Having the API return a complete list
of capabilities is the simplest solution. With that minor change there's no
need to add new infrastructure to track nodes vs. flavors.
> > 3. There's no easy way for the compute node to know which flavors it
> > supports. It doesn't know which filters are enabled in the scheduler
> > so it doesn't know which clauses of a flavor actually apply (ignoring
> > that the compute node would now have to duplicate the filtering
> > mechanism from the scheduler even if it knew which filters were
> > enabled).
I don't think that's an issue either. When the node filters the list
of flavours, it is doing so based on criteria about what it is technically
capable of supporting at a hardware level. When the scheduler is filtering
flavours it is doing so based on operational rules. The node doesn't need to
take over the filtering that the scheduler currently does. They are complementary
filtering rules.
I'm not sure what you're trying to say here. All I'm trying to discover is
what the node is "technically capable of supporting at a hardware level". Given
that info we can then create flavors that the scheduler can use to better utilize
the hardware resources available. Without duplicating the filtering work that
the scheduler is already doing there's no way for a compute node to know
whether it supports a flavor or not.
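Just to spell out what "knowing which flavors it supports" would
entail on the node side, here's a minimal sketch (the required_features
key and the flavor names are hypothetical, not real extra_specs): the
node would have to re-implement exactly the feature-matching the
scheduler's filters already do.

```python
# Features the node reports (illustrative set).
node_features = {"aes", "sse4.2", "vmx"}

# Hypothetical flavors, each listing the CPU features it requires.
flavors = [
    {"name": "m1.secure",  "required_features": {"aes"}},
    {"name": "m1.avx",     "required_features": {"avx"}},
    {"name": "m1.generic", "required_features": set()},
]

def supported_flavors(flavors, features):
    """A flavor is supported when all its required features are present."""
    return [f["name"] for f in flavors
            if f["required_features"] <= features]
```

This subset check is precisely the filtering logic the scheduler
already owns, which is the duplication I'm objecting to.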
--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
n0ano(a)n0ano.com
Ph: 303/443-3786