linux-kernel-digest V1 #113
Steve Underwood
steveu at coppice.org
Fri Jan 28 14:29:28 CST 2000
Robert Dinse wrote:
> On Thu, 27 Jan 2000 Larry McVoy <lm at bitmover.com> wrote:
> >
> > : Cache problems are important, but so are effective code and good
> > : algorithms. Of course, it's a pretty secure stance - can't add any new
> > : things because they will increase cache misses. But an extra kilobyte
> > : (probably less) isn't going to matter.
> >
> > The hell it doesn't. If I have an application that manages to fit the
> > working set in the on-chip cache, and you increase the kernel's
> > use of that on chip cache by 1K, you just blew the application out
> > of the cache.
>
> I hate to jump into this, but I also hate to see this scheduler idea
> ditched for invalid reasons.
>
> The code isn't going to get into the cache and displace your application
> just because it exists in the kernel. To get into the cache it actually has to
> be fetched.
>
> Since the scheduler is adaptive, the new code, other than the test, is
> only going to be executed under heavy load conditions. Surely the test isn't
> 1kbyte or anywhere close to it?
A partially good point, but flawed.
In the reasonable load case most of the new code won't be executed, so it won't
be fetched into the cache and should have little impact.
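As a rough sketch of the mechanism Robert is describing (the names and the
threshold are invented, and this is a stand-alone toy rather than real kernel
code): a cheap test on the hot path guards the new logic, so at normal load
the cold block is never fetched and never occupies instruction cache lines.

#include <stdio.h>

#define HEAVY_LOAD_THRESHOLD 8   /* invented tuning knob */

/* Cold path: these instructions only enter the i-cache when the
 * guard below actually fires. */
static void heavy_load_reschedule(void)
{
        printf("heavy-load path taken\n");
}

static void schedule_tick(int nr_running)
{
        /* The only cost on the common path is one compare-and-branch;
         * the cold function above stays out of the cache at normal load. */
        if (nr_running > HEAVY_LOAD_THRESHOLD)
                heavy_load_reschedule();
}

int main(void)
{
        schedule_tick(3);    /* normal load: cold code never touched */
        schedule_tick(12);   /* heavy load: cold code fetched now */
        return 0;
}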
Now consider the high load case. A high load means the machine is being
crippled by multiple number-crunching activities creating a compute requirement
beyond the system's reasonable means. This is a situation where the best thing
is to run the heavy compute jobs one by one (or, on a multiprocessor, one per
processor at a time). This minimises cache sloshing, and lets the processor (or
processors) run to its (or their) maximum potential. The total delay from the
start to the end of the jobs will be minimised. This may not always be
practical - multiple users will often fight to get their jobs to run, despite
the fact that everyone actually loses. How best could the system cope with
this? Well, probably by shutting down the scheduler for a while, so the jobs
really do run one by one. However lumpy interactive performance may get while
this happens, the total execution time is minimised, so the average pain caused
to the people waiting should also be minimised.
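To put rough numbers on that (a toy calculation, all figures invented): take
three CPU-bound jobs of 10 seconds each on one processor. Time-sliced finely,
all three finish near the 30 second mark, for a mean completion time of about
30 seconds; run back to back they finish at 10, 20 and 30 seconds, for a mean
of 20 seconds - and that ignores the cache sloshing, which only widens the gap.

#include <stdio.h>

int main(void)
{
        const int n = 3;          /* invented job count */
        const double t = 10.0;    /* invented job length, seconds */
        double rr = 0.0, fifo = 0.0;
        int i;

        for (i = 0; i < n; i++) {
                rr   += n * t;        /* fine time-slicing: all finish at the end */
                fifo += (i + 1) * t;  /* run to completion: job i finishes at (i+1)*t */
        }
        printf("mean completion, time-sliced: %.1fs\n", rr / n);
        printf("mean completion, one by one:  %.1fs\n", fifo / n);
        return 0;
}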
With modern processor performance so critically dependent on cache behaviour
(a trend that will continue, at least in the near term), there is probably an
argument for investigating a strategy like this: one that totally abandons the
system's normal "fair share" practices until the workload drops to a more
reasonable level. Provision must be made for those cases where the apparently
heavy compute requirement is simply caused by one or more runaway tasks stuck
in a loop. It needs careful thought, but I see some potential here for improved
performance under heavy compute loads. All that a slightly faster scheduler
will do is help the cache slosh faster.
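For concreteness, here is one shape such a strategy might take (purely
illustrative; every name and threshold is invented, and the real thing would
live in the kernel, not in a userspace toy): stay with normal fair-share
scheduling until the run queue has been saturated with CPU-bound tasks for a
sustained period, then let the running CPU-bound task keep the processor,
subject to a generous hard cap so a runaway loop cannot wedge the machine.

#include <stdio.h>

#define SATURATION_TICKS  500    /* invented: how long overload must persist */
#define RUNAWAY_CAP_TICKS 3000   /* invented: per-task cap while batching */

struct task_stats {
        int cpu_bound;           /* no recent voluntary sleeps */
        int ticks_this_slot;     /* consecutive ticks on the CPU */
};

static int overload_ticks;

/* Returns nonzero when the current task should keep the CPU. */
static int batch_mode_keep_running(struct task_stats *cur, int nr_cpu_bound)
{
        if (nr_cpu_bound > 1)
                overload_ticks++;
        else
                overload_ticks = 0;

        if (overload_ticks < SATURATION_TICKS)
                return 0;        /* light load: normal fair-share scheduling */
        if (!cur->cpu_bound)
                return 0;        /* interactive tasks still preempt */
        return cur->ticks_this_slot < RUNAWAY_CAP_TICKS;  /* cap runaways */
}

int main(void)
{
        struct task_stats t = { 1, 0 };
        int tick, kept = 0;

        /* Simulate 1000 ticks of a box saturated by 4 CPU-bound tasks. */
        for (tick = 0; tick < 1000; tick++) {
                if (batch_mode_keep_running(&t, 4)) {
                        kept++;
                        t.ticks_this_slot++;
                }
        }
        printf("ticks the running task kept the CPU: %d\n", kept);
        return 0;
}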
Steve
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo at vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/