Interesting analysis of Linux kernel threading by IBM

Chris McClellen cmcclellen at weather.com
Sat Jan 22 09:57:04 CST 2000



>>>> Mark Hahn <hahn at coffee.psychology.mcmaster.ca> 01/21 9:11 AM >>>
>> there are not only IO-bound servers out in the world.
>> There are a lot of compute-intensive, highly threaded applications.
>> The fact is that here we can satisfy both.

>this is not the issue.  the issue is that volanomark is specifically
>designed to create unrealistically long runqueues.  the question is 
>whether there are real applications (NOT volanomark in loopback mode)
>that have many threads which only synchronize with themselves, and 
>do not spend most of their time blocked on IO.

>the normal case is short runqueues, so a low-latency scheduler 
>is the right optimization choice.  if it becomes clear that some 
>real apps are causing long runqueues, we can add code.

I remember reading in several books, papers, comments, and
emails over the years: optimize the critical path.  If the current
scheduler is that optimization -- great.  If it's not, and you truly do
have a runqueue length of, say, 50+ a lot of the time, perhaps you can
get Mingo to optimize your critical path =).
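
For context on why runqueue length is the critical path here: the
scheduler of that era picks the next task by scanning every runnable
task for the best "goodness" score, so the pick itself costs O(runqueue
length).  A simplified sketch (the field names and heuristic are made
up for illustration; this is not the actual kernel code):

/*
 * Simplified model of a 2.2/2.3-era runqueue scan.  Every call to
 * pick_next() walks the whole list of runnable tasks, so a sustained
 * runqueue of 50+ means 50+ iterations per context switch.
 */
struct task {
	struct task *next;	/* runqueue as a singly linked list */
	int counter;		/* remaining timeslice, in ticks */
	int priority;		/* static priority */
};

/* Crude stand-in for the real goodness() heuristic. */
static int goodness(const struct task *t)
{
	return t->counter + t->priority;
}

struct task *pick_next(struct task *runqueue)
{
	struct task *t, *best = runqueue;
	int best_weight = -1;

	for (t = runqueue; t; t = t->next) {	/* O(n) scan */
		int w = goodness(t);
		if (w > best_weight) {
			best_weight = w;
			best = t;
		}
	}
	return best;
}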

However, Larry McVoy and others have made some good cases
for doing things a different way rather than poking around at
potentially dangerous "optimizations."

I remember the days when I worked for a company that got some
of the first sparc1000s ever produced.  (CPU0 went dead on one of
them a few days later, dispelling the myth of autoconfiguring around
dead processors.  If CPU0 dies, all bets are off and the system wouldn't
boot.)  We had Solaris 2.1, no patches.  Oh, the joy.  There were some
mighty funny bugs.  Our business was important enough to Sun that
they flew in kernel developers (one of the few times I have ever met
white-paper authors).

At any rate, after solving wrong-context problems (under heavy disk I/O
load, processes that were interrupted were resumed with the wrong
context because two lines of code were in the wrong order), time
went on, and Georgia Tech started to turn towards using the 1000s/2000s
to serve the general population of students.  I was also a student at
GaTech at the time, so I had the unique "privilege" of seeing Solaris
have problems both at our company and at GaTech.  (Anyone here from
GaTech remember the early days of acmex, etc.?)

Anyway, I learned at my job that, because of the performance problems
GaTech was experiencing, and the way GaTech was laid out (terminal
servers telnetting into the sparcs), they were going to put telnetd in
the kernel.  Ponder that, folks.  This was their #1 solution for quite
some time.  It turns out they eventually found a way to fix the problems
(that is, release 2.2 and a LOT of patches), and telnetd didn't get into
the kernel.  They were going to make this change based on one customer's
usage of Solaris and the SC1000/2000.  Bad idea; glad they canned it.

I think it will be interesting to look at Larry's suggestions for a Linux
that scales well to machines with many processors (there is the
linux-scalability project... hope Larry is in with them).  But I think
not optimizing the critical path is potentially bad.  Reduce critical-path
performance so that an edge case can run a bit better?  A load average of
144 on a uniprocessor at, say, 600MHz?  Hmm.  Not good.  A good scheduler
still has to split that 600MHz up amongst 144 processes.  Even if the
scheduler lowers its overhead so that picking a process is O(1), you will
still take a long time servicing those processes to completion (if they
stay runnable for long periods of time).  I wouldn't expect reasonable
turnaround time on those 144 procs =).  But if Mingo thinks his sched can
magically make my sustained rqlen of, say, 50+ run like the wind, great!
But a sustained high runqueue length can't be good, for many, many reasons.
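
To put rough numbers on that turnaround claim: even with a zero-cost O(1)
pick, fair sharing alone makes completion time scale with runqueue length.
A toy calculation (the 144 processes and 600MHz are just the hypothetical
load above, and each process is assumed to need one CPU-second of work):

#include <stdio.h>

/*
 * Back-of-the-envelope model: N CPU-bound processes sharing one CPU
 * under ideal round-robin, with scheduler overhead assumed to be zero.
 */
int main(void)
{
	double nprocs = 144.0;	/* sustained runqueue length */
	double work_s = 1.0;	/* CPU seconds each process needs */

	/* Each process sees 1/nprocs of the CPU, so finishing even one
	 * CPU-second of work takes ~nprocs seconds of wall clock. */
	printf("per-process CPU share: %.4f\n", 1.0 / nprocs);
	printf("wall clock to finish %.0fs of work: ~%.0f s\n",
	       work_s, nprocs * work_s);
	return 0;
}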



Chris McClellen
"I don't need no stinkin' DSM"








