Index of /archives/linux/kernel.org/kernel/people/rml/sched/for_alan

Name                                    Last modified      Size
Parent Directory                                              -
110-remove-wake-up-sync.patch           2002-04-21 08:40   5.3K
120-need_resched-abstraction.patch      2002-04-21 08:40   2.4K
130-frozen-lock.patch                   2002-04-21 08:40   2.3K
140-sched_yield.patch                   2002-04-21 08:40   1.4K
145-more-sched_yield.patch              2002-05-08 03:31   4.0K
150-need_resched-check.patch            2002-04-21 08:40    715
160-maxrtprio-1.patch                   2002-04-21 08:40   2.1K
165-maxrtprio.patch                     2002-05-08 03:31   2.1K
166-maxrtprio.patch                     2002-05-31 08:37   3.2K
170-migration_thread.patch              2002-04-21 08:40    12K
175-updated-migration_init.patch        2002-05-08 03:31   5.1K
180-misc-stuff.patch                    2002-04-21 08:40   5.4K
185-more-misc-stuff.patch               2002-05-31 01:26   3.8K
190-documentation.patch                 2002-05-31 01:26    13K
200-sched-yield.patch                   2002-09-12 07:02   3.5K
210-sched-comments.patch                2002-09-12 07:02    12K
220-task_cpu.patch                      2002-09-12 07:02   5.2K
230-sched-misc.patch                    2002-09-12 07:02   4.9K
README                                  2002-09-12 07:09   4.0K
sha256sums.asc                          2023-04-26 06:12   2.5K
Against 2.4.19-pre7-ac2 unless noted otherwise below.

145, 165, and 175 are against 2.4.19-pre7-ac4.

166, 185, and 190 are against 2.4.19-pre8-ac5.

200, 210, 220, and 230 are against 2.4.20-pre5-ac4.

110-remove-wake-up-sync.patch

	We no longer need sync wakeups, as the load balancer handles
	the case fine.  Remove wake_up_sync() and friends and the sync
	flag in __wake_up().

120-need_resched-abstraction.patch

	Abstract away access to need_resched into set_need_resched, etc.
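A userspace sketch of what such an abstraction looks like; the struct and helpers here are illustrative stand-ins, not the kernel's actual code. The point is that callers stop poking the flag directly and go through small helpers, so its representation can change later without touching every call site:

```c
#include <assert.h>

/* stand-in for the kernel's per-task structure */
struct task_sketch {
	volatile int need_resched;	/* "should this task reschedule?" flag */
};

/* mark the task as needing to call schedule() soon */
static inline void set_need_resched(struct task_sketch *p)
{
	p->need_resched = 1;
}

/* clear the flag once the reschedule has happened */
static inline void clear_need_resched(struct task_sketch *p)
{
	p->need_resched = 0;
}

/* query the flag without exposing how it is stored */
static inline int need_resched(const struct task_sketch *p)
{
	return p->need_resched != 0;
}
```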

130-frozen-lock.patch

	Fix scheduler deadlock on some platforms.  I'll let DaveM (the author)
	explain:

	Some platforms need to grab mm->page_table_lock during switch_mm().
	On the other hand code like swap_out() in mm/vmscan.c needs to hold
	mm->page_table_lock during wakeups which needs to grab the runqueue
	lock.  This creates a conflict and the resolution chosen here is to
	not hold the runqueue lock during context_switch().
	
	The implementation is specifically a "frozen" state implemented as a
	spinlock, which is held around the context_switch() call.  This allows
	the runqueue lock to be dropped during this time yet prevents another
	CPU from running the "not switched away from yet" task.
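A userspace sketch of the lock ordering only, using pthread mutexes in place of kernel spinlocks; all names are illustrative. The runqueue lock is released before context_switch() (which may need mm->page_table_lock on some arches), while the "frozen" lock keeps another CPU from picking up the outgoing task until the switch completes:

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t runqueue_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t frozen_lock   = PTHREAD_MUTEX_INITIALIZER;
static int switch_done;

static void context_switch_sketch(void)
{
	/* free to take page_table_lock here: runqueue lock not held */
	switch_done = 1;
}

static void schedule_sketch(void)
{
	pthread_mutex_lock(&runqueue_lock);
	/* ... pick the next task to run ... */
	pthread_mutex_lock(&frozen_lock);	/* pin the outgoing task */
	pthread_mutex_unlock(&runqueue_lock);	/* drop before switching */
	context_switch_sketch();
	pthread_mutex_unlock(&frozen_lock);	/* other CPUs may run it now */
}
```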

140-sched_yield.patch

	Optimize sched_yield.

145-more-sched_yield.patch

	More abstractions to yield().

150-need_resched-check.patch

	A new task can become runnable during schedule().  We always want to
	return from the scheduler with the highest-priority task running, so
	we should check need_resched before returning to see if we should
	rerun ourselves through schedule().  This check used to be in the
	scheduler but was removed and then re-added.
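A toy model of that re-check, with the wakeup race simulated by a counter; all names are illustrative. If need_resched was set again while we were switching (a higher-priority task woke up), we loop back through the scheduler instead of returning:

```c
#include <assert.h>

static int need_resched_flag;
static int schedule_passes;

/* pretend to switch tasks; simulate a wakeup racing the first pass */
static void switch_to_next_task(void)
{
	schedule_passes++;
	need_resched_flag = (schedule_passes < 2);
}

static void schedule_sketch(void)
{
	do {
		need_resched_flag = 0;
		switch_to_next_task();
		/* if a wakeup set the flag meanwhile, go around again */
	} while (need_resched_flag);
}
```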

160-maxrtprio-1.patch

	Clean up assumptions about the value of MAX_RT_PRIO.  No change to
	object code; just replace magic numbers with defines.

165-maxrtprio.patch

	Separate notion of "maximum real-time priority" from "maximum
	user-space real-time priority" via MAX_RT_PRIO vs MAX_USER_RT_PRIO
	defines.
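A sketch of what the split buys, with illustrative values and helper names (not the kernel's exact code): user-visible real-time priorities are bounded by MAX_USER_RT_PRIO, while the kernel itself is free to define MAX_RT_PRIO higher for internal use.

```c
#include <assert.h>

#define MAX_USER_RT_PRIO 100			/* cap on user-requested RT prio */
#define MAX_RT_PRIO	 MAX_USER_RT_PRIO	/* may be raised above the user cap */
#define MAX_PRIO	 (MAX_RT_PRIO + 40)	/* RT levels plus nice levels */

/* a priority is real-time iff it falls below MAX_RT_PRIO */
static int rt_prio(int prio)
{
	return prio < MAX_RT_PRIO;
}

/* user-supplied rt_priority must stay within the user-space range */
static int valid_user_rt_priority(int p)
{
	return p >= 1 && p < MAX_USER_RT_PRIO;
}
```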

166-maxrtprio.patch

	Further cleanup the code and move the defines to sched.h

170-migration_thread.patch

	Backport of the migration_thread migration code from 2.5.  This
	includes my interrupt-off bugfix and wli's new migration_init code.
	The migration_thread code allows arch-independent task migration
	via set_cpus_allowed() and allows the creation of things like task
	CPU affinity interfaces.

175-updated-migration_init.patch

	Rewrite of migration_init using Erich Focht's simpler method of using
	the initial migration_thread to migrate any future threads.  Also
	includes a fix for arches where logical != physical CPU mapping.

180-misc-stuff.patch

	Lots of misc stuff, almost entirely invariant and trivial cleanups.
	Specifically:

	- rename lock_task_rq -> task_rq_lock
	- rename unlock_task_rq -> task_rq_unlock
	- cleanup lock_task_rq
	- list_del_init -> list_del fix in dequeue_task
	- comment cleanups and additions
	- load_balance fixes and cleanups
	- simple optimization (rt_task -> policy!=SCHED_OTHER)

185-more-misc-stuff.patch

	More misc. cleanups and improvements:

        - move sched_find_first_bit from mmu_context.h to bitops.h
          as in 2.5.  Why it was ever in mmu_context.h is beyond me.
        - remove the RUN_CHILD_FIRST cruft from kernel/fork.c.
          Pretty clear this works great; we do not need the ifdefs.
        - Add comments to top of kernel/sched.c to briefly explain
          new scheduler design, give credit, and update copyright.
        - set_cpus_allowed optimization from Mike Kravetz: we do not
          need to involve a migration_thread if the task is not
          running; just update task->cpu.
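A userspace sketch of that fast path; the fields and helpers are stand-ins, not the kernel's. If the task is not currently running there is no CPU to chase it off of, so we can simply retarget task->cpu instead of waking a migration_thread:

```c
#include <assert.h>

struct task_sketch {
	unsigned long cpus_allowed;	/* bitmask of legal CPUs */
	unsigned int  cpu;		/* CPU the task is (or will be) on */
	int	      running;		/* is it executing right now? */
};

static int migration_requests;		/* times a migration_thread got involved */

/* lowest-numbered CPU present in a nonzero mask */
static unsigned int first_cpu(unsigned long mask)
{
	unsigned int cpu = 0;
	while (!(mask & (1UL << cpu)))
		cpu++;
	return cpu;
}

static void set_cpus_allowed_sketch(struct task_sketch *p, unsigned long mask)
{
	p->cpus_allowed = mask;
	if (mask & (1UL << p->cpu))
		return;				/* current CPU still allowed */
	if (!p->running) {
		p->cpu = first_cpu(mask);	/* fast path: just retarget */
		return;
	}
	migration_requests++;			/* slow path: migration_thread moves it */
	p->cpu = first_cpu(mask);
}
```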

190-documentation.patch

	Add sched-coding.txt, a description of scheduler methods and
	locking rules, and sched-design.txt, Ingo's original lkml
	email detailing the goals, design, and implementation of the
	scheduler.

200-sched-yield.patch

	Fix sched_yield for good.  Seriously.

210-sched-comments.patch

	Glorious comments everywhere.

220-task_cpu.patch

	"set_task_cpu()" and "task_cpu()" abstraction.
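A sketch of the accessor pair over a stand-in struct; the CONFIG name and field layout here are illustrative. Hiding the field behind task_cpu()/set_task_cpu() lets uniprocessor builds collapse it (every task is on CPU 0) and lets the field move later without touching callers:

```c
#include <assert.h>

struct task_sketch {
	unsigned int cpu;		/* which CPU the task runs on (SMP only) */
};

#ifdef CONFIG_SMP_SKETCH
static inline unsigned int task_cpu(const struct task_sketch *p)
{
	return p->cpu;
}
static inline void set_task_cpu(struct task_sketch *p, unsigned int cpu)
{
	p->cpu = cpu;
}
#else
/* uniprocessor: the answer is always CPU 0, no storage needed */
static inline unsigned int task_cpu(const struct task_sketch *p)
{
	(void)p;
	return 0;
}
static inline void set_task_cpu(struct task_sketch *p, unsigned int cpu)
{
	(void)p;
	(void)cpu;			/* nothing to record on UP */
}
#endif
```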

230-sched-misc.patch

	Misc. and trivial cleanups.