MESSAGE
DATE    | 2015-04-25
FROM    | Ruben Safir
SUBJECT | [LIU Comp Sci] Fwd: Kernel Scheduling and wait queues
Date: Sat, 25 Apr 2015 21:56:36 -0400
From: Ruben Safir
To: hangout, learn-at-nylxs.com
Subject: [LIU Comp Sci] Fwd: Kernel Scheduling and wait queues
Message-ID: <553C45D4.5010400-at-panix.com>
[Attachment: "Kernel Scheduling and wait queues.eml"]
From: ruben safir
Newsgroups: comp.unix.programmer
Subject: Kernel Scheduling and wait queues
Date: Wed, 22 Apr 2015 08:21:24 -0400
Organization: PANIX Public Access Internet and UNIX, NYC
I'm just posting this because I'm hoping I can get some insight from the advanced coders here...
thanx
Ruben ~~~~~~~~~~~~~~~~~~~
Ruben QUOTED Previously:
<<Chapter 4's discussion of how the wait queue is implemented completely surprised me.
He seems to be recommending that you write your own wait-queue entry routine for every process. Isn't that reckless?
He is suggesting
DEFINE_WAIT(wait);              /* what IS "wait" exactly in this context? */

add_wait_queue(q, &wait);       /* in the current kernel this involves
                                   flag checking and a linked list */

while (!condition) {            /* the event we are waiting for */
        prepare_to_wait(&q, &wait, TASK_INTERRUPTIBLE);
        if (signal_pending(current))
                /* handle the signal */
        schedule();
}

finish_wait(&q, &wait);
He also writes about how this proceeds to function, and one part confuses me:

5. When the task awakens, it again checks whether the condition is true. If it is, it exits the loop. Otherwise it again calls schedule.
This is not the order that it seems to follow according to the code.
To me it looks like it should:
1 - create the wait queue
2 - add &wait onto queue q
3 - check if the condition is true; if so, skip the loop, if not, enter the while loop
4 - prepare_to_wait, which changes the status of our &wait to TASK_INTERRUPTIBLE
5 - check for signals ... notice the process is still moving. Does it stop and wait now?
6 - schedule itself on the runtime rbtree ... which makes NO sense unless there was a stoppage I didn't know about
7 - check the condition again and repeat the while loop
7a - if the loop ends, finish_wait ... take it off the queue
Isn't this reckless, leaving it to users to write the code? You're begging for a race condition.
Ruben >>
On 04/21/2015 11:05 AM, michi1-at-michaelblizek.twilightparadox.com wrote:
> Hi!
>
> On 12:39 Mon 20 Apr, Ruben Safir wrote:
>> On 04/20/2015 11:23 AM, michi1-at-michaelblizek.twilightparadox.com wrote:
>>> I would not recommend that. There are already functions in linux/wait.h
>>> for these purposes, like wait_event_interruptible().
>>
>> Can you do that in the kernel? Does wait_event_interruptible create wait
>> queues?
>
> No, wait_event_interruptible waits on an existing waitqueue. If you want to
> create a waitqueue, call init_waitqueue_head().
>
> -Michi
Here is the confusing part. This is a discussion of wait and semaphores in a standard text:
5.6.2 Semaphore Implementation Recall that the implementation of mutex locks discussed in Section 5.5 suffers from busy waiting. The definitions of the wait() and signal() semaphore operations just described present the same problem. To overcome the need for busy waiting, we can modify the definition of the wait() and signal() operations as follows: When a process executes the wait() operation and finds that the semaphore value is not positive, it must wait. However, rather than engaging in busy waiting, the process can block itself. The block operation places a process into a waiting queue associated with the semaphore, and the state of the process is switched to the waiting state. Then control is transferred to the CPU scheduler, which selects another process to execute.
A process that is blocked, waiting on a semaphore S, should be restarted when some other process executes a signal() operation. The process is restarted by a wakeup() operation, which changes the process from the waiting state to the ready state. The process is then placed in the ready queue. (The CPU may or may not be switched from the running process to the newly ready process, depending on the CPU-scheduling algorithm.)
To implement semaphores under this definition, we define a semaphore as follows:
typedef struct {
    int value;
    struct process *list;
} semaphore;
Each semaphore has an integer value and a list of processes list. When a process must wait on a semaphore, it is added to the list of processes. A signal() operation removes one process from the list of waiting processes and awakens that process. Now, the wait() semaphore operation can be defined as
wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}
and the signal() semaphore operation can be defined as
signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
Minus the semaphore, that sounds like what we are doing with the wait list in the scheduler. But it looks like we are leaving it to the user. Why? It is similar, but oddly different, so I'm trying to figure out what is happening here.
Ruben
_______________________________________________
Kernelnewbies mailing list
Kernelnewbies-at-kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies
[Attachment: "Re: Kernel Scheduler and wiat queues.eml"]
From: Kaz Kylheku
Newsgroups: comp.unix.programmer
Subject: Re: Kernel Scheduler and wiat queues
Date: Wed, 22 Apr 2015 16:31:25 +0000 (UTC)
Message-ID: <20150422091305.302-at-kylheku.com>
On 2015-04-22, ruben safir wrote: > DEFINE_WAIT(wait) //what IS wait EXACTLY in this context
A preprocessor token, used as the argument to a macro call.
To understand what it means, you must look at the macro expansion.
What the macro does is generate code to define a variable whose name is "wait". It is abstracted away because, possibly, the variable is not only defined, but requires initialization. It requires initialization in ways that could change as the kernel is maintained, so it needs to be abstract.
> add_wait_queue(q, &wait);   // in the current kernel this involved
>                             // flag checking and a linked list
>
> while (!condition) {        /* the event we are waiting for */
>     prepare_to_wait(&q, &wait, TASK_INTERRUPTIBLE);
>     if (signal_pending(current))
>         /* handle the signal */
>     schedule();
> }
>
> finish_wait(&q, &wait);
>
> He also writes about how this proceeds to function, and one part confuses me:
>
> 5. When the task awakens, it again checks whether the condition is
> true. If it is, it exits the loop. Otherwise it again calls schedule.
This is just the consequence of the code you have before you. When the task awakens it returns out of the schedule() call, and continues with the loop.
> This is not the order that it seems to follow according to the code.
>
> To me it looks like it should
> 1 - create the wait queue
You really mean "wait queue node". The wait queue already exists, but the task needs a queuing data structure so that it can add itself to that queue.
Why does a task need an extra structure for this, rather than just some link pointers inside its task structure? The answer is that a task can wait on multiple wait queues at the same time:
DEFINE_WAIT(node0);
DEFINE_WAIT(node1);
/* ... */
DEFINE_WAIT(nodeN);

add_wait_queue(q0, &node0);
add_wait_queue(q1, &node1);
/* ... */
add_wait_queue(qN, &nodeN);

while (!condition) {
        prepare_to_wait(&q0, &node0, TASK_INTERRUPTIBLE);
        prepare_to_wait(&q1, &node1, TASK_INTERRUPTIBLE);
        /* ... */
        prepare_to_wait(&qN, &nodeN, TASK_INTERRUPTIBLE);

        if (signal_pending(current)) {
                /* handle the signal */
        }

        schedule();
}

finish_wait(&q0, &node0);
finish_wait(&q1, &node1);
/* ... */
finish_wait(&qN, &nodeN);
> 5 check for signals ... notice the process is still moving. Does it > stop and wait now?
No, it stops and waits somewhere inside the schedule() function, and only there. Elsewhere, the code is running.
> 6 schedule itself on the runtime rbtree... which makes NO sense unless
> there was a stoppage I didn't know about.
It makes perfect sense. The task changes its status to "interruptible sleep", and then informs the scheduler, by calling the schedule() function. The scheduler puts that into effect, and removes the task from the run queue. Then it makes a scheduling decision to dispatch a task. The task which called it is not eligible for execution now, so another task is chosen.
> Isn't this reckless, leaving it to users to write the code? You're
> begging for a race condition.
Yes, of course you are absolutely right; most kernel developers should not be writing the above open code most of the time. The best justification for it is that you're developing a new synchronization primitive (which itself requires justification as to why the existing ones aren't good enough).
There are higher level primitives which wrap the above logic: semaphores, read-write semaphores, mutexes, events, poll, ...
For instance, look for wait_event, wait_event_interruptible, and related macros.
[Attachment: "Re: Kernel Scheduler and wiat queues.eml"]
From: Rainer Weikusat
Newsgroups: comp.unix.programmer
Subject: Re: Kernel Scheduler and wiat queues
Date: Wed, 22 Apr 2015 17:55:06 +0100
Message-ID: <87sibsays5.fsf-at-doppelsaurus.mobileactivedefense.com>
Kaz Kylheku writes:
> On 2015-04-22, ruben safir wrote:
[...]
>> 6 schedule itself on the runtime rbtree... which make NO sense unless
>> there was a stopage I didn't know about.
>
> It makes perfect sense. The task changes its status to "interruptible sleep",
> and then informs the scheduler, by calling the schedule() function.
> The scheduler puts that into effect, and removes the task from the run
> queue.
schedule() is the top-level entry point of the scheduler, so a more accurate description would be "it calls into the scheduler in order to make it select some task to run". Maybe a higher-priority task than the one which called schedule is runnable. In this case, the higher-priority task will run instead of the one which called schedule. Otherwise, this task will continue to run, unless it changed its state to something other than 'runnable' first.
>> Isn't this reckless to leave this to users to write the code. Your
>> begging for a race condition.
>
> Yes, of course you are absolutely right; most kernel developers should not be
> writing the above open code most of the time. The best justification for
> it is that you're developing a new synchronization primitive (which itself
> requires justification as to why the existing ones aren't good
> enough).
I completely disagree with this: just because more or less useful and more or less buggy or bug-free 'special-case wait for this, that or something else' code keeps piling up in include/linux/wait.h doesn't mean anyone ought to be required to read through all of it, whenever some kernel code has to wait for something, in order to select the currently least-worst bit of infrastructure code from there, considering that the simple solution of just coding the wait loop works for every situation and is all over the kernel for historical reasons anyway.
Some people can't ever do anything without producing a "general purpose library" (or a couple of these) [or can't ever do anything except that]. Considering this obvious lack of practically applicable skills, no code deserves to be used just because someone wrote it.
[Attachment: "Re: Kernel Scheduler and wiat queues.eml"]
From: Kaz Kylheku
Newsgroups: comp.unix.programmer
Subject: Re: Kernel Scheduler and wiat queues
Date: Wed, 22 Apr 2015 18:20:48 +0000 (UTC)
Message-ID: <20150422110344.469-at-kylheku.com>
On 2015-04-22, Rainer Weikusat wrote:
> Kaz Kylheku writes:
>> On 2015-04-22, ruben safir wrote:
>>> Isn't this reckless to leave this to users to write the code. Your
>>> begging for a race condition.
>>
>> Yes, of course you are absolutely right; most kernel developers should not be
>> writing the above open code most of the time. The best justification for
>> it is that you're developing a new synchronization primitive (which itself
>> requires justification as to why the existing ones aren't good
>> enough).
>
> I completely disagree with this: Just because more or less useful and
> more or less buggy of bug-free 'special-case wait for this, that or
> something else' code keeps piling up in include/linux/wait.h doesn't
> mean anyone ought to be required to read through all of this whenever
> some kernel code has to wait for something to select the currently least
> worse bit of infrastructure code from there, considering that the simple
> solution of just coding the wait loop will work for every situation and
> is all over the kernel for historical reasons, anyway.
I'm the opposite. Give me a large number of primitives to choose from!
By the way, I used mutexes and condition variables in the kernel back in 1998, before any such a thing existed in the kernel, and made a little package of that. It supported a driver I was working on:
http://www.kylheku.com/~kaz/lmc.html
A few years ago, on another job, I also wrote a "super poll function". With this bugger, you could, simultaneously:
- wait on a set of sockets (struct socket *)
- wait on a set of files (struct file *)
- wait on a condition variable
- with a timeout
With this mega-function, I multiplexed a ton of functionality onto a single kernel thread.
[Attachment: "Re: Kernel Scheduler and wiat queues.eml"]
From: Kaz Kylheku
Newsgroups: comp.unix.programmer
Subject: Re: Kernel Scheduler and wiat queues
Date: Wed, 22 Apr 2015 23:04:32 +0000 (UTC)
Message-ID: <20150422160337.511-at-kylheku.com>
On 2015-04-22, Kaz Kylheku wrote:
> By the way, I used mutexes and condition variables in the kernel back in 1998,
> before any such a thing existed in the kernel, and made a little package of
> that. It supported a driver I was working on:
>
> http://www.kylheku.com/~kaz/lmc.html
>
> A few years ago, on another job, I also wrote a "super poll function".
> With this bugger, you could, simultaneously:
>
> - wait on a set of sockets (struct socket *)
> - wait on a set of files (struct file *)
> - wait on a condition variable
> - with a timeout
I just noticed that I had added that function to the above "LMC" project, haha.