MESSAGE
DATE | 2015-04-22
FROM | Ruben Safir
SUBJECT | [LIU Comp Sci] Fwd: Re: wait queues semiphores kernel implementations
-------- Forwarded Message --------
Subject: Re: wait queues semiphores kernel implementations
Date: Wed, 22 Apr 2015 18:49:13 +0200
From: michi1-at-michaelblizek.twilightparadox.com
To: Ruben Safir
CC: kernelnewbies-at-kernelnewbies.org
Hi!
On 07:23 Wed 22 Apr , Ruben Safir wrote:
> Ruben QUOTED Previously:
>
> << Chapter 4, on how the wait queue is implemented, completely
> surprised me.
>
> He is recommending that you have to write your own wait queue entry
> routine for every process? Isn't that reckless?
>
> He is suggesting:
>
> DEFINE_WAIT(wait)   // what IS wait EXACTLY in this context?
#define DEFINE_WAIT_FUNC(name, function)                        \
        wait_queue_t name = {                                   \
                .private   = current,                           \
                .func      = function,                          \
                .task_list = LIST_HEAD_INIT((name).task_list),  \
        }
#define DEFINE_WAIT(name) DEFINE_WAIT_FUNC(name, autoremove_wake_function)
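Hand-expanding the two macros above, DEFINE_WAIT(wait) produces roughly the
following (expansion written out here for illustration): a wait_queue_t on
the stack, pointing at the current task, with autoremove_wake_function as
the wake callback:

    wait_queue_t wait = {
            .private   = current,                  /* the task that will sleep */
            .func      = autoremove_wake_function, /* wakes the task and takes
                                                      the entry off the queue  */
            .task_list = LIST_HEAD_INIT(wait.task_list),
    };

So "wait" in this context is simply a queue entry that points back at the
current task; it is what wake_up() later walks over to find whom to wake.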
> add_wait_queue(q, &wait);  // in the current kernel this involves
>                            // flag checking and a linked list
>
> while (!condition) {       /* an event we are waiting for */
>         prepare_to_wait(&q, &wait, TASK_INTERRUPTIBLE);
>         if (signal_pending(current))
>                 /* SIGNAL HANDLING */
>         schedule();
> }
>
> finish_wait(&q, &wait);
>
> He also writes how this proceeds to function, and one part confuses me:
>
> 5. When the task awakens, it again checks whether the condition is
> true. If it is, it exits the loop. Otherwise it again calls schedule.
>
> This is not the order that it seems to follow according to the code.
>
> To me it looks like it should:
> 1 - create the wait queue
> 2 - add &wait onto queue q
> 3 - check whether the condition is true; if not, enter the while loop
> 4 - prepare_to_wait, which changes the status of our &wait to
>     TASK_INTERRUPTIBLE
> 5 - check for signals ... notice the process is still moving. Does it
>     stop and wait now?
> 6 - schedule itself on the runtime rbtree ... which makes NO sense unless
>     there was a stoppage I didn't know about.
> 7 - check the condition again and repeat the while loop
> 7a - if the loop ends, finish_wait ... take it off the queue.
This is what wait_event_interruptible looks like: http://lxr.linux.no/linux+*/include/linux/wait.h#L390
It seems that prepare_to_wait is now called before checking the condition, and add_wait_queue does not exist anymore.
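For reference, a condensed paraphrase of that loop (simplified, not a
verbatim copy of the wait.h code linked above):

    DEFINE_WAIT(__wait);
    int ret = 0;

    for (;;) {
            /* the task state is set BEFORE the condition is tested */
            prepare_to_wait(&wq, &__wait, TASK_INTERRUPTIBLE);
            if (condition)
                    break;
            if (signal_pending(current)) {
                    ret = -ERESTARTSYS;
                    break;
            }
            schedule();
    }
    finish_wait(&wq, &__wait);

Because the task is marked TASK_INTERRUPTIBLE before the condition is
re-tested, a wake_up() that races with the test still reschedules the task
rather than being lost.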
> Isn't it reckless to leave this code to users to write? You're
> begging for a race condition.
I agree. This is why I would not recommend it unless you have a good reason to do so.
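To make the race concrete, here is a sketch (hypothetical code, not from
the text) of the broken ordering a hand-rolled loop can fall into:

    /* BROKEN: the condition is tested while the task is still RUNNING */
    while (!condition) {
            /* if wake_up() fires right here, it sees a running task,
               so it has no effect -- the wakeup is lost ...           */
            set_current_state(TASK_INTERRUPTIBLE);
            schedule();   /* ... and the task may now sleep forever    */
    }

prepare_to_wait() exists precisely to flip that ordering: mark the task as
sleeping first, then re-test the condition.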
...
> Minus the semaphore, that sounds like what we are doing with the wait
> list in the scheduler. But it looks like we are leaving it to the
> user. Why? It is similar but oddly different, so I'm trying to figure
> out what is happening here.
The concept behind a waitqueue is not so much about counting up and down. Basically, when you call wait_event_* you define what you are waiting for. For example, you have a socket and want to wait for incoming data. Whenever anything happens to the socket (e.g. data arrives, an error, ...), somebody calls wake_up, your thread wakes up and checks whether the condition is true, and then wait_event_* either goes back to sleep or returns.
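A minimal sketch of that pattern (my_queue and data_ready are made-up
names, for illustration only):

    static DECLARE_WAIT_QUEUE_HEAD(my_queue);
    static int data_ready;

    /* reader: sleeps until data_ready is set (or a signal arrives) */
    static int wait_for_data(void)
    {
            return wait_event_interruptible(my_queue, data_ready != 0);
    }

    /* writer / interrupt side: make the condition true, then wake */
    static void data_arrived(void)
    {
            data_ready = 1;
            wake_up_interruptible(&my_queue);
    }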
The difference is that you can have situations where wait_event_* returns without anybody even having called wake_up. You can also have situations with lots of calls to wake_up where wait_event_* always goes back to sleep, because the events that happen do not cause your condition to become true.
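Contrast that with a semaphore, where the count itself remembers the event
(sketch, assuming the semaphore starts at zero):

    static struct semaphore sem;

    sema_init(&sem, 0);        /* nothing available yet                  */
    up(&sem);                  /* event recorded: the count becomes 1    */
    down_interruptible(&sem);  /* consumes the stored count and returns
                                  immediately -- the "wakeup" was kept,
                                  not discarded (return value ignored
                                  here only to keep the sketch short)    */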
-Michi
--
programming a layer 3+4 network protocol for mesh networks
see http://michaelblizek.twilightparadox.com