diff -Nru a/Documentation/00-INDEX b/Documentation/00-INDEX
--- a/Documentation/00-INDEX Tue Mar 4 19:30:05 2003
+++ b/Documentation/00-INDEX Tue Mar 4 19:30:05 2003
@@ -56,6 +56,8 @@
- info on Computone Intelliport II/Plus Multiport Serial Driver
cpqarray.txt
- info on using Compaq's SMART2 Intelligent Disk Array Controllers.
+cpufreq/
+ - info on CPU frequency and voltage scaling
cris/
- directory with info about Linux on CRIS architecture.
devices.txt
diff -Nru a/Documentation/DocBook/Makefile b/Documentation/DocBook/Makefile
--- a/Documentation/DocBook/Makefile Tue Mar 4 19:30:08 2003
+++ b/Documentation/DocBook/Makefile Tue Mar 4 19:30:08 2003
@@ -11,7 +11,7 @@
kernel-locking.sgml via-audio.sgml mousedrivers.sgml \
deviceiobook.sgml procfs-guide.sgml tulip-user.sgml \
writing_usb_driver.sgml scsidrivers.sgml sis900.sgml \
- kernel-api.sgml journal-api.sgml lsm.sgml
+ kernel-api.sgml journal-api.sgml lsm.sgml usb.sgml
###
# The build process is as follows (targets):
diff -Nru a/Documentation/DocBook/deviceiobook.tmpl b/Documentation/DocBook/deviceiobook.tmpl
--- a/Documentation/DocBook/deviceiobook.tmpl Tue Mar 4 19:30:08 2003
+++ b/Documentation/DocBook/deviceiobook.tmpl Tue Mar 4 19:30:08 2003
@@ -152,7 +152,7 @@
While the basic functions are defined to be synchronous with respect
to each other and ordered with respect to each other the busses the
- devices sit on may themselves have asynchronocity. In paticular many
+ devices sit on may themselves have asynchronicity. In particular many
authors are burned by the fact that PCI bus writes are posted
asynchronously. A driver author must issue a read from the same
device to ensure that writes have occurred in the specific cases the
diff -Nru a/Documentation/DocBook/kernel-api.tmpl b/Documentation/DocBook/kernel-api.tmpl
--- a/Documentation/DocBook/kernel-api.tmpl Tue Mar 4 19:30:03 2003
+++ b/Documentation/DocBook/kernel-api.tmpl Tue Mar 4 19:30:03 2003
@@ -228,102 +228,6 @@
-->
-
- USB Devices
-
- Drivers for USB devices talk to the "usbcore" APIs, and are
- exposed through driver frameworks such as block, character,
- or network devices.
- There are two types of public "usbcore" APIs: those intended for
- general driver use, and those which are only public to drivers that
- are part of the core.
- The drivers that are part of the core are involved in managing a USB bus.
- They include the "hub" driver, which manages trees of USB devices, and
- several different kinds of "host controller" driver (HCD), which control
- individual busses.
-
-
- The device model seen by USB drivers is relatively complex.
-
-
-
-
- USB supports four kinds of data transfer
- (control, bulk, interrupt, and isochronous). Two transfer
- types use bandwidth as it's available (control and bulk),
- while the other two types of transfer (interrupt and isochronous)
- are scheduled to provide guaranteed bandwidth.
-
-
- The device description model includes one or more
- "configurations" per device, only one of which is active at a time.
-
-
- Configurations have one or more "interface", each
- of which may have "alternate settings". Interfaces may be
- standardized by USB "Class" specifications, or may be specific to
- a vendor or device.
-
- USB device drivers actually bind to interfaces, not devices.
- Think of them as "interface drivers", though you
- may not see many devices where the distinction is important.
- Most USB devices are simple, with only one configuration,
- one interface, and one alternate setting.
-
-
- Interfaces have one or more "endpoints", each of
- which supports one type and direction of data transfer such as
- "bulk out" or "interrupt in". The entire configuration may have
- up to sixteen endpoints in each direction, allocated as needed
- among all the interfaces.
-
-
- Data transfer on USB is packetized; each endpoint
- has a maximum packet size.
- Drivers must often be aware of conventions such as flagging the end
- of bulk transfers using "short" (including zero length) packets.
-
-
- The Linux USB API supports synchronous calls for
- control and bulk messaging.
- It also supports asynchnous calls for all kinds of data transfer,
- using request structures called "URBs" (USB Request Blocks).
-
-
-
-
- Accordingly, the USB Core API exposed to device drivers
- covers quite a lot of territory. You'll probably need to consult
- the USB 2.0 specification, available online from www.usb.org at
- no cost, as well as class or device specifications.
-
-
- Data Types and Macros
-!Iinclude/linux/usb.h
-
-
- USB Core APIs
-!Edrivers/usb/core/urb.c
-
-!Edrivers/usb/core/message.c
-!Edrivers/usb/core/file.c
-!Edrivers/usb/core/usb.c
-
-
- Host Controller APIs
- These APIs are only for use by host controller drivers,
- most of which implement standard register interfaces such as
- EHCI, OHCI, or UHCI.
-
-!Edrivers/usb/core/hcd.c
-!Edrivers/usb/core/hcd-pci.c
-!Edrivers/usb/core/buffer.c
-
-
-
-
16x50 UART Driver
!Edrivers/serial/core.c
diff -Nru a/Documentation/DocBook/parportbook.tmpl b/Documentation/DocBook/parportbook.tmpl
--- a/Documentation/DocBook/parportbook.tmpl Tue Mar 4 19:30:04 2003
+++ b/Documentation/DocBook/parportbook.tmpl Tue Mar 4 19:30:04 2003
@@ -1149,7 +1149,7 @@
peripheral what transfer mode it would like to use, and the
peripheral either accepts that mode or rejects it; if the mode is
rejected, the host can try again with a different mode. This is
- the negotation phase. Once the peripheral has accepted a
+ the negotiation phase. Once the peripheral has accepted a
  particular transfer mode, data transfer can begin in that mode.
diff -Nru a/Documentation/DocBook/usb.tmpl b/Documentation/DocBook/usb.tmpl
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/Documentation/DocBook/usb.tmpl Tue Mar 4 19:30:14 2003
@@ -0,0 +1,294 @@
+
+
+
+ The Linux-USB Host Side API
+
+
+
+ This documentation is free software; you can redistribute
+ it and/or modify it under the terms of the GNU General Public
+ License as published by the Free Software Foundation; either
+ version 2 of the License, or (at your option) any later
+ version.
+
+
+
+ This program is distributed in the hope that it will be
+ useful, but WITHOUT ANY WARRANTY; without even the implied
+ warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ See the GNU General Public License for more details.
+
+
+
+ You should have received a copy of the GNU General Public
+ License along with this program; if not, write to the Free
+ Software Foundation, Inc., 59 Temple Place, Suite 330, Boston,
+ MA 02111-1307 USA
+
+
+
+ For more details see the file COPYING in the source
+ distribution of Linux.
+
+
+
+
+
+
+
+ Introduction to USB on Linux
+
+ A Universal Serial Bus (USB) is used to connect a host,
+ such as a PC or workstation, to a number of peripheral
+ devices. USB uses a tree structure, with the host at the
+ root (the system's master), hubs as interior nodes, and
+ peripheral devices as leaves (and slaves).
+ Modern PCs support several such trees of USB devices, usually
+ one USB 2.0 tree (480 Mbit/sec each) with
+ a few USB 1.1 trees (12 Mbit/sec each) that are used when you
+ connect a USB 1.1 device directly to the machine's "root hub".
+
+
+ That master/slave asymmetry was designed in part for
+ ease of use. It is not physically possible to assemble
+ (legal) USB cables incorrectly: all upstream "to-the-host"
+ connectors are the rectangular type, matching the sockets on
+ root hubs, and the downstream connectors are the squarish type
+ (or they are built into the peripheral).
+ Software doesn't need to deal with distributed autoconfiguration
+ since the pre-designated master node manages all that.
+ At the electrical level, bus protocol overhead is reduced by
+ eliminating arbitration and moving scheduling into host software.
+
+
+ USB 1.0 was announced in January 1996, and was revised
+ as USB 1.1 (with improvements in hub specification and
+ support for interrupt-out transfers) in September 1998.
+ USB 2.0 was released in April 2000, including high speed
+ transfers and transaction translating hubs (used for USB 1.1
+ and 1.0 backward compatibility).
+
+
+ USB support was added to Linux early in the 2.2 kernel series
+ shortly before the 2.3 development forked off. Updates
+ from 2.3 were regularly folded back into 2.2 releases, bringing
+ new features such as /sbin/hotplug support,
+ more drivers, and more robustness.
+ The 2.5 kernel series continued such improvements, and also
+ worked on USB 2.0 support,
+ higher performance,
+ better consistency between host controller drivers,
+ API simplification (to make bugs less likely),
+ and providing internal "kerneldoc" documentation.
+
+
+ Linux can run inside USB devices as well as on
+ the hosts that control the devices.
+ Because the Linux 2.x USB support evolved to support mass market
+ platforms such as Apple Macintosh or PC-compatible systems,
+ it didn't address the design concerns of other types of USB systems.
+ So it can't be used inside mass-market PDAs, or other peripherals.
+ USB device drivers running inside those Linux peripherals
+ don't do the same things as the ones running inside hosts,
+ and so they've been given a different name:
+ they're called gadget drivers.
+ This document does not present gadget drivers.
+
+
+
+
+
+ USB Host-Side API Model
+
+ Host-side drivers for USB devices talk to the "usbcore" APIs.
+ There are two types of public "usbcore" APIs, targeted at two different
+ layers of USB driver. Those are
+ general purpose drivers, exposed through
+ driver frameworks such as block, character, or network devices;
+ and drivers that are part of the core,
+ which are involved in managing a USB bus.
+ Such core drivers include the hub driver,
+ which manages trees of USB devices, and several different kinds
+ of host controller driver (HCD),
+ which control individual busses.
+
+
+ The device model seen by USB drivers is relatively complex.
+
+
+
+
+ USB supports four kinds of data transfer
+ (control, bulk, interrupt, and isochronous). Two transfer
+ types use bandwidth as it's available (control and bulk),
+ while the other two types of transfer (interrupt and isochronous)
+ are scheduled to provide guaranteed bandwidth.
+
+
+ The device description model includes one or more
+ "configurations" per device, only one of which is active at a time.
+ Devices that are capable of high speed operation must also support
+ full speed configurations, along with a way to ask about the
+ "other speed" configurations that might be used.
+
+
+ Configurations have one or more "interfaces", each
+ of which may have "alternate settings". Interfaces may be
+ standardized by USB "Class" specifications, or may be specific to
+ a vendor or device.
+
+ USB device drivers actually bind to interfaces, not devices.
+ Think of them as "interface drivers", though you
+ may not see many devices where the distinction is important.
+ Most USB devices are simple, with only one configuration,
+ one interface, and one alternate setting.
+
+
+ Interfaces have one or more "endpoints", each of
+ which supports one type and direction of data transfer such as
+ "bulk out" or "interrupt in". The entire configuration may have
+ up to sixteen endpoints in each direction, allocated as needed
+ among all the interfaces.
+
+
+ Data transfer on USB is packetized; each endpoint
+ has a maximum packet size.
+ Drivers must often be aware of conventions such as flagging the end
+ of bulk transfers using "short" (including zero length) packets.
+
+
+ The Linux USB API supports synchronous calls for
+ control and bulk messaging.
+ It also supports asynchronous calls for all kinds of data transfer,
+ using request structures called "URBs" (USB Request Blocks).
+
+
+
+
+ Accordingly, the USB Core API exposed to device drivers
+ covers quite a lot of territory. You'll probably need to consult
+ the USB 2.0 specification, available online from www.usb.org at
+ no cost, as well as class or device specifications.
+
+
+ The only host-side drivers that actually touch hardware
+ (reading/writing registers, handling IRQs, and so on) are the HCDs.
+ In theory, all HCDs provide the same functionality through the same
+ API. In practice, that's becoming more true on the 2.5 kernels,
+ but there are still differences that crop up especially with
+ fault handling. Different controllers don't necessarily report
+ the same aspects of failures, and recovery from faults (including
+ software-induced ones like unlinking an URB) isn't yet fully
+ consistent.
+ Device driver authors should make a point of doing disconnect
+ testing (while the device is active) with each different host
+ controller driver, to make sure drivers don't have bugs of
+ their own as well as to make sure they aren't relying on some
+ HCD-specific behavior.
+ (You will need external USB 1.1 and/or
+ USB 2.0 hubs to perform all those tests.)
+
+
+
+
+USB-Standard Types
+
+ In <linux/usb_ch9.h> you will find
+ the USB data types defined in chapter 9 of the USB specification.
+ These data types are used throughout USB, and in APIs including
+ this host side API, gadget APIs, and usbfs.
+
+
+!Iinclude/linux/usb_ch9.h
+
+
+
+Host-Side Data Types and Macros
+
+ The host side API exposes several layers to drivers, some of
+ which are more necessary than others.
+ These support lifecycle models for host side drivers
+ and devices, and support passing buffers through usbcore to
+ some HCD that performs the I/O for the device driver.
+
+
+
+!Iinclude/linux/usb.h
+
+
+
+ USB Core APIs
+
+ There are two basic I/O models in the USB API.
+ The most elemental one is asynchronous: drivers submit requests
+ in the form of an URB, and the URB's completion callback
+ handles the next step.
+ All USB transfer types support that model, although there
+ are special cases for control URBs (which always have setup
+ and status stages, but may not have a data stage) and
+ isochronous URBs (which allow large packets and include
+ per-packet fault reports).
+ Built on top of that is synchronous API support, where a
+ driver calls a routine that allocates one or more URBs,
+ submits them, and waits until they complete.
+ There are synchronous wrappers for single-buffer control
+ and bulk transfers (which are awkward to use in some
+ driver disconnect scenarios), and for scatterlist-based
+ streaming I/O (bulk or interrupt).
+
+
+ USB drivers need to provide buffers that can be
+ used for DMA, although they don't necessarily need to
+ provide the DMA mapping themselves.
+ There are APIs to use when allocating DMA buffers,
+ which can prevent use of bounce buffers on some systems.
+ In some cases, drivers may be able to rely on 64-bit DMA
+ to eliminate another kind of bounce buffer.
+
+
+!Edrivers/usb/core/urb.c
+!Edrivers/usb/core/message.c
+!Edrivers/usb/core/file.c
+!Edrivers/usb/core/usb.c
+
+
+ Host Controller APIs
+
+ These APIs are only for use by host controller drivers,
+ most of which implement standard register interfaces such as
+ EHCI, OHCI, or UHCI.
+ UHCI was one of the first interfaces, designed by Intel and
+ also used by VIA; it doesn't do much in hardware.
+ OHCI was designed later, to have the hardware do more work
+ (bigger transfers, tracking protocol state, and so on).
+ EHCI was designed with USB 2.0; its design has features that
+ resemble OHCI (hardware does much more work) as well as
+ UHCI (some parts of ISO support, TD list processing).
+
+
+ There are host controllers other than the "big three",
+ although most PCI based controllers (and a few non-PCI based
+ ones) use one of those interfaces.
+ Not all host controllers use DMA; some use PIO, and there
+ is also a simulator.
+
+
+ The same basic APIs are available to drivers for all
+ those controllers.
+ For historical reasons they are in two layers:
+ struct usb_bus is a rather thin
+ layer that became available in the 2.2 kernels, while
+ struct usb_hcd is a more featureful
+ layer (available in later 2.4 kernels and in 2.5) that
+ lets HCDs share common code, to shrink driver size
+ and significantly reduce hcd-specific behaviors.
+
+
+!Edrivers/usb/core/hcd.c
+!Edrivers/usb/core/hcd-pci.c
+!Edrivers/usb/core/buffer.c
+
+
+
+
diff -Nru a/Documentation/arm/XScale/IOP310/IQ80310 b/Documentation/arm/XScale/IOP310/IQ80310
--- a/Documentation/arm/XScale/IOP310/IQ80310 Tue Mar 4 19:30:13 2003
+++ /dev/null Wed Dec 31 16:00:00 1969
@@ -1,295 +0,0 @@
-
-Board Overview
------------------------------
-
-The Cyclone IQ80310 board is an evaluation platform for Intel's 80200 Xscale
-CPU and 80312 Intelligent I/O chipset (collectively called IOP310 chipset).
-
-The 80312 contains dual PCI hoses (called the ATUs), a PCI-to-PCI bridge,
-three DMA channels (1 on secondary PCI, one on primary PCI ), I2C, I2O
-messaging unit, XOR unit for RAID operations, a bus performance monitoring
-unit, and a memory controller with ECC features.
-
-For more information on the board, see http://developer.intel.com/iio
-
-Port Status
------------------------------
-
-Supported:
-
-- MTD/JFFS/JFFS2
-- NFS root
-- RAMDISK root
-- 2ndary PCI slots
-- Onboard ethernet
-- Serial ports (ttyS0/S1)
-- Cache/TLB locking on 80200 CPU
-- Performance monitoring unit on 80200 CPU
-- 80200 Performance Monitoring Unit
-- Acting as a system controller on Cyclone 80303BP PCI backplane
-- DMA engines (EXPERIMENTAL)
-- 80312 Bus Performance Monitor (EXPERIMENTAL)
-- Application Accelerator Unit (XOR engine for RAID) (EXPERIMENTAL)
-- Messaging Unit (EXPERIMENTAL)
-
-TODO:
-- I2C
-
-Building the Kernel
------------------------------
-make iq80310_config
-make oldconfig
-make dep
-make zImage
-
-This will build an image setup for BOOTP/NFS root support. To change this,
-just run make menuconfig and disable nfs root or add a "root=" option.
-
-Preparing the Hardware
------------------------------
-
-This document assumes you're using a Rev D or newer board running
-Redboot as the bootloader.
-
-The as-supplied RedBoot image appears to leave the first page of RAM
-in a corrupt state such that certain words in that page are unwritable
-and contain random data. The value of the data, and the location within
-the first page changes with each boot, but is generally in the range
-0xa0000150 to 0xa0000fff.
-
-You can grab the source from the ECOS CVS or you can get a prebuilt image
-from:
-
- ftp://source.mvista.com/pub/xscale/iop310/IQ80310/redboot.bin
-
-which is:
-
- # strings redboot.bin | grep bootstrap
- RedBoot(tm) bootstrap and debug environment, version UNKNOWN - built 14:58:21, Aug 15 2001
-
-md5sum of this version:
-
- bcb96edbc6f8e55b16c165930b6e4439 redboot.bin
-
-You have two options to program it:
-
-1. Using the FRU program (see the instructions in the user manual).
-
-2. Using a Linux host, with MTD support built into the host kernel:
- - ensure that the RedBoot image is not locked (issue the following
- command under the existing RedBoot image):
- RedBoot> fis unlock -f 0 -l 0x40000
- - switch S3-1 and S3-2 on.
- - reboot the host
- - login as root
- - identify the 80310 card:
- # lspci
- ...
- 00:0c.1 Memory controller: Intel Corporation 80310 IOP [IO Processor] (rev 01)
- - in this example, bus 0, slot 0c, function 1.
- - insert the MTD modules, and the PCI map module:
- # insmod drivers/mtd/maps/pci.o
- - locate the MTD device (using the bus, slot, function)
- # cat /proc/mtd
- dev: size erasesize name
- mtd0: 00800000 00020000 "00:0c.1"
- - in this example, it is mtd device 0. Yours will be different.
- Check carefully.
- - program the flash
- # cat redboot.bin > /dev/mtdblock0
- - check the kernel message log for errors (some cat commands don't
- error on failure)
- # dmesg
- - switch S3-1 and S3-2 off
- - reboot host
-
-In any case, make sure you do an 'fis init' command once you boot with the new
-RedBoot image.
-
-
-
-Downloading Linux
------------------------------
-
-Assuming you have your development system setup to act as a bootp/dhcp
-server and running tftp:
-
- RedBoot> load -r -b 0xa1008000 /tftpboot/zImage.xs
- Raw file loaded 0xa1008000-0xa1094bd8
-
-If you're not using dhcp/tftp, you can use y-modem instead:
-
- RedBoot> load -r -b 0xa1008000 -m y
-
-Note that on Rev D. of the board, tftp does not work due to intermittent
-interrupt issues, so you need to download using ymodem.
-
-Once the download is completed:
-
- RedBoot> go 0xa1008000
-
-Root Devices
------------------------------
-
-A kernel is not useful without a root filesystem, and you have several
-choices with this board: NFS root, RAMDISK, or JFFS/JFFS2. For development
-purposes, it is suggested that you use NFS root for easy access to various
-tools. Once you're ready to deploy, probably want to utilize JFFS/JFFS2 on
-the flash device.
-
-MTD on the IQ80310
------------------------------
-
-Linux on the IQ80310 supports RedBoot FIS paritioning if it is enabled.
-Out of the box, once you've done 'fis init' on RedBoot, you will get
-the following partitioning scheme:
-
- root@192.168.0.14:~# cat /proc/mtd
- dev: size erasesize name
- mtd0: 00040000 00020000 "RedBoot"
- mtd1: 00040000 00020000 "RedBoot[backup]"
- mtd2: 0075f000 00020000 "unallocated space"
- mtd3: 00001000 00020000 "RedBoot config"
- mtd4: 00020000 00020000 "FIS directory"
-
-To create an FIS directory, you need to use the fis command in RedBoot.
-As an example, you can burn the kernel into the flash once it's downloaded:
-
- RedBoot> fis create -b 0xa1008000 -l 0x8CBAC -r 0xa1008000 -f 0x80000 kernel
- ... Erase from 0x00080000-0x00120000: .....
- ... Program from 0xa1008000-0xa1094bac at 0x00080000: .....
- ... Unlock from 0x007e0000-0x00800000: .
- ... Erase from 0x007e0000-0x00800000: .
- ... Program from 0xa1fdf000-0xa1fff000 at 0x007e0000: .
- ... Lock from 0x007e0000-0x00800000: .
-
- RedBoot> fis list
- Name FLASH addr Mem addr Length Entry point
- RedBoot 0x00000000 0x00000000 0x00040000 0x00000000
- RedBoot[backup] 0x00040000 0x00040000 0x00040000 0x00000000
- RedBoot config 0x007DF000 0x007DF000 0x00001000 0x00000000
- FIS directory 0x007E0000 0x007E0000 0x00020000 0x00000000
- kernel 0x00080000 0xA1008000 0x000A0000 0x00000000
-
-This leads to the following Linux MTD setup:
-
- mtroot@192.168.0.14:~# cat /proc/mtd
- dev: size erasesize name
- mtd0: 00040000 00020000 "RedBoot"
- mtd1: 00040000 00020000 "RedBoot[backup]"
- mtd2: 000a0000 00020000 "kernel"
- mtd3: 006bf000 00020000 "unallocated space"
- mtd4: 00001000 00020000 "RedBoot config"
- mtd5: 00020000 00020000 "FIS directory"
-
-Note that there is not a 1:1 mapping to the number of RedBoot paritions to
-MTD partitions as unused space also gets allocated into MTD partitions.
-
-As an aside, the -r option when creating the Kernel entry allows you to
-simply do an 'fis load kernel' to copy the image from flash into memory.
-You can then do an 'fis go 0xa1008000' to start Linux.
-
-If you choose to use static partitioning instead of the RedBoot partioning:
-
- /dev/mtd0 0x00000000 - 0x0007ffff: Boot Monitor (512k)
- /dev/mtd1 0x00080000 - 0x0011ffff: Kernel Image (640K)
- /dev/mtd2 0x00120000 - 0x0071ffff: File System (6M)
- /dev/mtd3 0x00720000 - 0x00800000: RedBoot Reserved (896K)
-
-To use a JFFS1/2 root FS, you need to donwload the JFFS image using either
-tftp or ymodem, and then copy it to flash:
-
- RedBoot> load -r -b 0xa1000000 /tftpboot/jffs.img
- Raw file loaded 0xa1000000-0xa1600000
- RedBoot> fis create -b 0xa1000000 -l 0x600000 -f 0x120000 jffs
- ... Erase from 0x00120000-0x00720000: ..................................
- ... Program from 0xa1000000-0xa1600000 at 0x00120000: ..................
- ......................
- ... Unlock from 0x007e0000-0x00800000: .
- ... Erase from 0x007e0000-0x00800000: .
- ... Program from 0xa1fdf000-0xa1fff000 at 0x007e0000: .
- ... Lock from 0x007e0000-0x00800000: .
- RedBoot> fis list
- Name FLASH addr Mem addr Length Entry point
- RedBoot 0x00000000 0x00000000 0x00040000 0x00000000
- RedBoot[backup] 0x00040000 0x00040000 0x00040000 0x00000000
- RedBoot config 0x007DF000 0x007DF000 0x00001000 0x00000000
- FIS directory 0x007E0000 0x007E0000 0x00020000 0x00000000
- kernel 0x00080000 0xA1008000 0x000A0000 0xA1008000
- jffs 0x00120000 0x00120000 0x00600000 0x00000000
-
-This looks like this in Linux:
-
- root@192.168.0.14:~# cat /proc/mtd
- dev: size erasesize name
- mtd0: 00040000 00020000 "RedBoot"
- mtd1: 00040000 00020000 "RedBoot[backup]"
- mtd2: 000a0000 00020000 "kernel"
- mtd3: 00600000 00020000 "jffs"
- mtd4: 000bf000 00020000 "unallocated space"
- mtd5: 00001000 00020000 "RedBoot config"
- mtd6: 00020000 00020000 "FIS directory"
-
-You need to boot the kernel once and watch the boot messages to see how the
-JFFS RedBoot partition mapped into the MTD partition scheme.
-
-You can grab a pre-built JFFS image to use as a root file system at:
-
- ftp://source.mvista.com/pub/xscale/iop310/IQ80310/jffs.img
-
-For detailed info on using MTD and creating a JFFS image go to:
-
- http://www.linux-mtd.infradead.org.
-
-For details on using RedBoot's FIS commands, type 'fis help' or consult
-your RedBoot manual.
-
-Contributors
------------------------------
-
-Thanks to Intel Corporation for providing the hardware.
-
-John Clark - Initial discovery of RedBoot issues
-Dave Jiang - IRQ demux fixes, AAU, DMA, MU
-Nicolas Pitre - Initial port, cleanup, debugging
-Matt Porter - PCI subsystem development, debugging
-Tim Sanders - Initial PCI code
-Mark Salter - RedBoot fixes
-Deepak Saxena - Cleanup, debug, cache lock, PMU
-
------------------------------
-Enjoy.
-
-If you have any problems please contact Deepak Saxena
-
-A few notes from rmk
------------------------------
-
-These are notes of my initial experience getting the IQ80310 Rev D up and
-running. In total, it has taken many hours to work out what's going on...
-The version of redboot used is:
-
- RedBoot(tm) bootstrap and debug environment, version UNKNOWN - built 14:58:21, Aug 15 2001
-
-
-1. I've had a corrupted download of the redboot.bin file from Montavista's
- FTP site. It would be a good idea if there were md5sums, sum or gpg
- signatures available to ensure the integrity of the downloaded files.
- The result of this was an apparantly 100% dead card.
-
-2. RedBoot Intel EtherExpress Pro 100 driver seems to be very unstable -
- I've had it take out the whole of a 100mbit network for several minutes.
- The Hub indiates ZERO activity, despite machines attempting to communicate.
- Further to this, while tftping the kernel, the transfer will stall regularly,
- and might even drop the link LED.
-
-3. There appears to be a bug in the Intel Documentation Pack that comes with
- the IQ80310 board. Serial port 1, which is the socket next to the LEDs
- is address 0xfe810000, not 0xfe800000.
-
- Note that RedBoot uses either serial port 1 OR serial port 2, so if you
- have your console connected to the wrong port, you'll see redboot messages
- but not kernel boot messages.
-
-4. Trying to use fconfig to setup a boot script fails - it hangs when trying
- to erase the flash.
diff -Nru a/Documentation/arm/XScale/IOP3XX/IQ80310 b/Documentation/arm/XScale/IOP3XX/IQ80310
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/Documentation/arm/XScale/IOP3XX/IQ80310 Tue Mar 4 19:30:13 2003
@@ -0,0 +1,248 @@
+
+Board Overview
+-----------------------------
+
+The Cyclone IQ80310 board is an evaluation platform for Intel's 80200 Xscale
+CPU and 80312 Intelligent I/O chipset (collectively called IOP310 chipset).
+
+The 80312 contains dual PCI hoses (called the ATUs), a PCI-to-PCI bridge,
+three DMA channels (1 on secondary PCI, one on primary PCI ), I2C, I2O
+messaging unit, XOR unit for RAID operations, a bus performance monitoring
+unit, and a memory controller with ECC features.
+
+For more information on the board, see http://developer.intel.com/iio
+
+Port Status
+-----------------------------
+
+Supported:
+
+- MTD/JFFS/JFFS2
+- NFS root
+- RAMDISK root
+- 2ndary PCI slots
+- Onboard ethernet
+- Serial ports (ttyS0/S1)
+- Cache/TLB locking on 80200 CPU
+- Performance monitoring unit on 80200 CPU
+- 80200 Performance Monitoring Unit
+- Acting as a system controller on Cyclone 80303BP PCI backplane
+- DMA engines (EXPERIMENTAL)
+- 80312 Bus Performance Monitor (EXPERIMENTAL)
+- Application Accelerator Unit (XOR engine for RAID) (EXPERIMENTAL)
+- Messaging Unit (EXPERIMENTAL)
+
+TODO:
+- I2C
+
+Building the Kernel
+-----------------------------
+make iq80310_config
+make oldconfig
+make dep
+make zImage
+
+This will build an image setup for BOOTP/NFS root support. To change this,
+just run make menuconfig and disable nfs root or add a "root=" option.
+
+Preparing the Hardware
+-----------------------------
+
+This document assumes you're using a Rev D or newer board running
+Redboot as the bootloader. Note that the version of RedBoot provided
+with the boards has a major issue and you need to replace it with the
+latest RedBoot. You can grab the source from the ECOS CVS or you can
+get a prebuilt image and burn it in using FRU at:
+
+ ftp://source.mvista.com/pub/xscale/iq80310/redboot.bin
+
+Make sure you do an 'fis init' command once you boot with the new
+RedBoot image.
+
+
+
+Downloading Linux
+-----------------------------
+
+Assuming you have your development system setup to act as a bootp/dhcp
+server and running tftp:
+
+ RedBoot> load -r -b 0xa1008000 /tftpboot/zImage.xs
+ Raw file loaded 0xa1008000-0xa1094bd8
+
+If you're not using dhcp/tftp, you can use y-modem instead:
+
+ RedBoot> load -r -b 0xa1008000 -m y
+
+Note that on Rev D. of the board, tftp does not work due to intermittent
+interrupt issues, so you need to download using ymodem.
+
+Once the download is completed:
+
+ RedBoot> go 0xa1008000
+
+Root Devices
+-----------------------------
+
+A kernel is not useful without a root filesystem, and you have several
+choices with this board: NFS root, RAMDISK, or JFFS/JFFS2. For development
+purposes, it is suggested that you use NFS root for easy access to various
+tools. Once you're ready to deploy, you'll probably want to use JFFS/JFFS2 on
+the flash device.
+
+MTD on the IQ80310
+-----------------------------
+
+Linux on the IQ80310 supports RedBoot FIS partitioning if it is enabled.
+Out of the box, once you've done 'fis init' on RedBoot, you will get
+the following partitioning scheme:
+
+ root@192.168.0.14:~# cat /proc/mtd
+ dev: size erasesize name
+ mtd0: 00040000 00020000 "RedBoot"
+ mtd1: 00040000 00020000 "RedBoot[backup]"
+ mtd2: 0075f000 00020000 "unallocated space"
+ mtd3: 00001000 00020000 "RedBoot config"
+ mtd4: 00020000 00020000 "FIS directory"
+
+To create an FIS directory, you need to use the fis command in RedBoot.
+As an example, you can burn the kernel into the flash once it's downloaded:
+
+ RedBoot> fis create -b 0xa1008000 -l 0x8CBAC -r 0xa1008000 -f 0x80000 kernel
+ ... Erase from 0x00080000-0x00120000: .....
+ ... Program from 0xa1008000-0xa1094bac at 0x00080000: .....
+ ... Unlock from 0x007e0000-0x00800000: .
+ ... Erase from 0x007e0000-0x00800000: .
+ ... Program from 0xa1fdf000-0xa1fff000 at 0x007e0000: .
+ ... Lock from 0x007e0000-0x00800000: .
+
+ RedBoot> fis list
+ Name FLASH addr Mem addr Length Entry point
+ RedBoot 0x00000000 0x00000000 0x00040000 0x00000000
+ RedBoot[backup] 0x00040000 0x00040000 0x00040000 0x00000000
+ RedBoot config 0x007DF000 0x007DF000 0x00001000 0x00000000
+ FIS directory 0x007E0000 0x007E0000 0x00020000 0x00000000
+ kernel 0x00080000 0xA1008000 0x000A0000 0x00000000
+
+This leads to the following Linux MTD setup:
+
+ mtroot@192.168.0.14:~# cat /proc/mtd
+ dev: size erasesize name
+ mtd0: 00040000 00020000 "RedBoot"
+ mtd1: 00040000 00020000 "RedBoot[backup]"
+ mtd2: 000a0000 00020000 "kernel"
+ mtd3: 006bf000 00020000 "unallocated space"
+ mtd4: 00001000 00020000 "RedBoot config"
+ mtd5: 00020000 00020000 "FIS directory"
+
+Note that there is not a 1:1 mapping of RedBoot partitions to
+MTD partitions, as unused space also gets allocated into MTD partitions.
+
+As an aside, the -r option when creating the Kernel entry allows you to
+simply do an 'fis load kernel' to copy the image from flash into memory.
+You can then do an 'fis go 0xa1008000' to start Linux.
+
+If you choose to use static partitioning instead of the RedBoot partitioning:
+
+ /dev/mtd0 0x00000000 - 0x0007ffff: Boot Monitor (512k)
+ /dev/mtd1 0x00080000 - 0x0011ffff: Kernel Image (640K)
+ /dev/mtd2 0x00120000 - 0x0071ffff: File System (6M)
+ /dev/mtd3 0x00720000 - 0x00800000: RedBoot Reserved (896K)
+
+To use a JFFS1/2 root FS, you need to download the JFFS image using either
+tftp or ymodem, and then copy it to flash:
+
+ RedBoot> load -r -b 0xa1000000 /tftpboot/jffs.img
+ Raw file loaded 0xa1000000-0xa1600000
+ RedBoot> fis create -b 0xa1000000 -l 0x600000 -f 0x120000 jffs
+ ... Erase from 0x00120000-0x00720000: ..................................
+ ... Program from 0xa1000000-0xa1600000 at 0x00120000: ..................
+ ......................
+ ... Unlock from 0x007e0000-0x00800000: .
+ ... Erase from 0x007e0000-0x00800000: .
+ ... Program from 0xa1fdf000-0xa1fff000 at 0x007e0000: .
+ ... Lock from 0x007e0000-0x00800000: .
+ RedBoot> fis list
+ Name FLASH addr Mem addr Length Entry point
+ RedBoot 0x00000000 0x00000000 0x00040000 0x00000000
+ RedBoot[backup] 0x00040000 0x00040000 0x00040000 0x00000000
+ RedBoot config 0x007DF000 0x007DF000 0x00001000 0x00000000
+ FIS directory 0x007E0000 0x007E0000 0x00020000 0x00000000
+ kernel 0x00080000 0xA1008000 0x000A0000 0xA1008000
+ jffs 0x00120000 0x00120000 0x00600000 0x00000000
+
+This looks like this in Linux:
+
+ root@192.168.0.14:~# cat /proc/mtd
+ dev: size erasesize name
+ mtd0: 00040000 00020000 "RedBoot"
+ mtd1: 00040000 00020000 "RedBoot[backup]"
+ mtd2: 000a0000 00020000 "kernel"
+ mtd3: 00600000 00020000 "jffs"
+ mtd4: 000bf000 00020000 "unallocated space"
+ mtd5: 00001000 00020000 "RedBoot config"
+ mtd6: 00020000 00020000 "FIS directory"
+
+You need to boot the kernel once and watch the boot messages to see how the
+JFFS RedBoot partition maps into the MTD partition scheme.
+
+You can grab a pre-built JFFS image to use as a root file system at:
+
+ ftp://source.mvista.com/pub/xscale/iq80310/jffs.img
+
+For detailed info on using MTD and creating a JFFS image go to:
+
+	http://www.linux-mtd.infradead.org
+
+For details on using RedBoot's FIS commands, type 'fis help' or consult
+your RedBoot manual.
+
+Contributors
+-----------------------------
+
+Thanks to Intel Corporation for providing the hardware.
+
+John Clark - Initial discovery of RedBoot issues
+Dave Jiang - IRQ demux fixes, AAU, DMA, MU
+Nicolas Pitre - Initial port, cleanup, debugging
+Matt Porter - PCI subsystem development, debugging
+Tim Sanders - Initial PCI code
+Mark Salter - RedBoot fixes
+Deepak Saxena - Cleanup, debug, cache lock, PMU
+
+-----------------------------
+Enjoy.
+
+If you have any problems please contact Deepak Saxena
+
+A few notes from rmk
+-----------------------------
+
+These are notes of my initial experience getting the IQ80310 Rev D up and
+running. In total, it has taken many hours to work out what's going on...
+The version of RedBoot used is:
+
+ RedBoot(tm) bootstrap and debug environment, version UNKNOWN - built 14:58:21, Aug 15 2001
+
+
+1. I've had a corrupted download of the redboot.bin file from Montavista's
+   FTP site.  It would be a good idea if there were md5sums, sum or gpg
+   signatures available to ensure the integrity of the downloaded files.
+   The result of this was an apparently 100% dead card.
+
+2. RedBoot's Intel EtherExpress Pro 100 driver seems to be very unstable -
+   I've had it take out the whole of a 100Mbit network for several minutes.
+   The hub indicates ZERO activity, despite machines attempting to communicate.
+   Further to this, while tftping the kernel, the transfer will stall regularly,
+   and might even drop the link LED.
+
+3. There appears to be a bug in the Intel Documentation Pack that comes with
+   the IQ80310 board.  Serial port 1, which is the socket next to the LEDs,
+   is at address 0xfe810000, not 0xfe800000.
+
+ Note that RedBoot uses either serial port 1 OR serial port 2, so if you
+   have your console connected to the wrong port, you'll see RedBoot messages
+ but not kernel boot messages.
+
+4. Trying to use fconfig to setup a boot script fails - it hangs when trying
+ to erase the flash.
diff -Nru a/Documentation/arm/XScale/IOP3XX/IQ80321 b/Documentation/arm/XScale/IOP3XX/IQ80321
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/Documentation/arm/XScale/IOP3XX/IQ80321 Tue Mar 4 19:30:14 2003
@@ -0,0 +1,216 @@
+
+Board Overview
+-----------------------------
+
+The Worcester IQ80321 board is an evaluation platform for Intel's 80321 XScale
+CPU (sometimes called the IOP321 chipset).
+
+The 80321 contains a single PCI hose (called the ATUs), a PCI-to-PCI bridge,
+two DMA channels, I2C, I2O messaging unit, XOR unit for RAID operations,
+a bus performance monitoring unit, and a memory controller with ECC features.
+
+For more information on the board, see http://developer.intel.com/iio
+
+Port Status
+-----------------------------
+
+Supported:
+
+- MTD/JFFS/JFFS2 root
+- NFS root
+- RAMDISK root
+- Serial port (ttyS0)
+- Cache/TLB locking on 80321 CPU
+- Performance monitoring unit on 80321 CPU
+
+TODO:
+
+- DMA engines
+- I2C
+- 80321 Bus Performance Monitor
+- Application Accelerator Unit (XOR engine for RAID)
+- I2O Messaging Unit
+- SSP
+
+Building the Kernel
+-----------------------------
+make iq80321_config
+make oldconfig
+make dep
+make zImage
+
+This will build an image set up for BOOTP/NFS root support.  To change this,
+just run make menuconfig and disable NFS root or add a "root=" option.
+
+Preparing the Hardware
+-----------------------------
+
+Make sure you do an 'fis init' command once you boot with the new
+RedBoot image.
+
+Downloading Linux
+-----------------------------
+
+Assuming you have your development system set up to act as a BOOTP/DHCP
+server and running tftp:
+
+NOTE: The 80321 board uses a different default memory map than the 80310.
+
+ RedBoot> load -r -b 0x01008000 -m y
+
+Once the download is completed:
+
+ RedBoot> go 0x01008000
+
+There is a version of RedBoot floating around that has DHCP support, but
+I've never been able to cleanly transfer a kernel image and have it run.
+
+Root Devices
+-----------------------------
+
+A kernel is not useful without a root filesystem, and you have several
+choices with this board: NFS root, RAMDISK, or JFFS/JFFS2. For development
+purposes, it is suggested that you use NFS root for easy access to various
+tools.  Once you're ready to deploy, you'll probably want to use JFFS/JFFS2 on
+the flash device.
+
+MTD on the IQ80321
+-----------------------------
+
+Linux on the IQ80321 supports RedBoot FIS partitioning if it is enabled.
+Out of the box, once you've done 'fis init' on RedBoot, you will get
+the following partitioning scheme:
+
+ root@192.168.0.14:~# cat /proc/mtd
+ dev: size erasesize name
+ mtd0: 00040000 00020000 "RedBoot"
+ mtd1: 00040000 00020000 "RedBoot[backup]"
+ mtd2: 0075f000 00020000 "unallocated space"
+ mtd3: 00001000 00020000 "RedBoot config"
+ mtd4: 00020000 00020000 "FIS directory"
+
+To create an FIS directory, you need to use the fis command in RedBoot.
+As an example, you can burn the kernel into the flash once it's downloaded:
+
+ RedBoot> fis create -b 0x01008000 -l 0x8CBAC -r 0x01008000 -f 0x80000 kernel
+ ... Erase from 0x00080000-0x00120000: .....
+ ... Program from 0x01008000-0x01094bac at 0x00080000: .....
+ ... Unlock from 0x007e0000-0x00800000: .
+ ... Erase from 0x007e0000-0x00800000: .
+ ... Program from 0x01fdf000-0x01fff000 at 0x007e0000: .
+ ... Lock from 0x007e0000-0x00800000: .
+
+ RedBoot> fis list
+ Name FLASH addr Mem addr Length Entry point
+ RedBoot 0x00000000 0x00000000 0x00040000 0x00000000
+ RedBoot[backup] 0x00040000 0x00040000 0x00040000 0x00000000
+ RedBoot config 0x007DF000 0x007DF000 0x00001000 0x00000000
+ FIS directory 0x007E0000 0x007E0000 0x00020000 0x00000000
+ kernel 0x00080000 0x01008000 0x000A0000 0x00000000
+
+This leads to the following Linux MTD setup:
+
+ mtroot@192.168.0.14:~# cat /proc/mtd
+ dev: size erasesize name
+ mtd0: 00040000 00020000 "RedBoot"
+ mtd1: 00040000 00020000 "RedBoot[backup]"
+ mtd2: 000a0000 00020000 "kernel"
+ mtd3: 006bf000 00020000 "unallocated space"
+ mtd4: 00001000 00020000 "RedBoot config"
+ mtd5: 00020000 00020000 "FIS directory"
+
+Note that there is not a 1:1 mapping between RedBoot partitions and MTD
+partitions, as unused space also gets allocated into MTD partitions.
+
+As an aside, the -r option when creating the Kernel entry allows you to
+simply do an 'fis load kernel' to copy the image from flash into memory.
+You can then do an 'fis go 0x01008000' to start Linux.
+
+If you choose to use static partitioning instead of the RedBoot partitioning:
+
+ /dev/mtd0 0x00000000 - 0x0007ffff: Boot Monitor (512k)
+ /dev/mtd1 0x00080000 - 0x0011ffff: Kernel Image (640K)
+ /dev/mtd2 0x00120000 - 0x0071ffff: File System (6M)
+ /dev/mtd3 0x00720000 - 0x00800000: RedBoot Reserved (896K)
+
+To use a JFFS1/2 root FS, you need to download the JFFS image using either
+tftp or ymodem, and then copy it to flash:
+
+ RedBoot> load -r -b 0x01000000 /tftpboot/jffs.img
+ Raw file loaded 0x01000000-0x01600000
+ RedBoot> fis create -b 0x01000000 -l 0x600000 -f 0x120000 jffs
+ ... Erase from 0x00120000-0x00720000: ..................................
+ ... Program from 0x01000000-0x01600000 at 0x00120000: ..................
+ ......................
+ ... Unlock from 0x007e0000-0x00800000: .
+ ... Erase from 0x007e0000-0x00800000: .
+ ... Program from 0x01fdf000-0x01fff000 at 0x007e0000: .
+ ... Lock from 0x007e0000-0x00800000: .
+ RedBoot> fis list
+ Name FLASH addr Mem addr Length Entry point
+ RedBoot 0x00000000 0x00000000 0x00040000 0x00000000
+ RedBoot[backup] 0x00040000 0x00040000 0x00040000 0x00000000
+ RedBoot config 0x007DF000 0x007DF000 0x00001000 0x00000000
+ FIS directory 0x007E0000 0x007E0000 0x00020000 0x00000000
+ kernel 0x00080000 0x01008000 0x000A0000 0x01008000
+ jffs 0x00120000 0x00120000 0x00600000 0x00000000
+
+This looks like this in Linux:
+
+ root@192.168.0.14:~# cat /proc/mtd
+ dev: size erasesize name
+ mtd0: 00040000 00020000 "RedBoot"
+ mtd1: 00040000 00020000 "RedBoot[backup]"
+ mtd2: 000a0000 00020000 "kernel"
+ mtd3: 00600000 00020000 "jffs"
+ mtd4: 000bf000 00020000 "unallocated space"
+ mtd5: 00001000 00020000 "RedBoot config"
+ mtd6: 00020000 00020000 "FIS directory"
+
+You need to boot the kernel once and watch the boot messages to see how the
+JFFS RedBoot partition maps into the MTD partition scheme.
+
+You can grab a pre-built JFFS image to use as a root file system at:
+
+ ftp://source.mvista.com/pub/xscale/iq80310/jffs.img
+
+For detailed info on using MTD and creating a JFFS image go to:
+
+	http://www.linux-mtd.infradead.org
+
+For details on using RedBoot's FIS commands, type 'fis help' or consult
+your RedBoot manual.
+
+BUGS and ISSUES
+-----------------------------
+
+* As shipped from Intel, pre-production boards have two issues:
+
+- The onboard Ethernet is disabled because S8E1-2 is off.  You will need to
+  turn it on.
+
+- The PCIXCAPs are configured for a 100MHz clock, but the clock selected is
+  actually only 66MHz. This causes the wrong PLL multiplier to be used and the
+  board only runs at 400MHz instead of 600MHz. The way to observe this is to
+  use an independent clock to time a "sleep 10" command from the prompt. If it
+  takes 15 seconds instead of 10, you are running at 400MHz.
+
+- The experimental IOP310 drivers for the AAU, DMA, etc. are not supported yet.
+
+Contributors
+-----------------------------
+The port to the IQ80321 was performed by:
+
+Rory Bolt - Initial port, debugging.
+
+This port was based on the IQ80310 port with the following contributors:
+
+Nicolas Pitre - Initial port, cleanup, debugging
+Matt Porter - PCI subsystem development, debugging
+Tim Sanders - Initial PCI code
+Deepak Saxena - Cleanup, debug, cache lock, PMU
+
+The port is currently maintained by Deepak Saxena
+
+-----------------------------
+Enjoy.
diff -Nru a/Documentation/arm/XScale/IOP3XX/aau.txt b/Documentation/arm/XScale/IOP3XX/aau.txt
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/Documentation/arm/XScale/IOP3XX/aau.txt Tue Mar 4 19:30:14 2003
@@ -0,0 +1,178 @@
+Support functions for the Intel 80310 AAU
+===========================================
+
+Dave Jiang
+Last updated: 09/18/2001
+
+The Intel 80312 companion chip in the 80310 chipset contains an AAU. The
+AAU is capable of processing up to 8 data block sources and performing XOR
+operations on them. This unit is typically used to accelerate XOR
+operations utilized by RAID storage device drivers such as RAID 5. This
+API is designed to provide a set of functions to take advantage of the
+AAU. The AAU can also be used to transfer data blocks and act as a memory
+copier. The AAU transfers memory faster than a copy performed by the CPU,
+so it is recommended for large memory copies.
+
+------------------
+int aau_request(u32 *aau_context, const char *device_id);
+This function allows the user to acquire control of the AAU. The
+function will return an AAU context to the user and allocate
+an interrupt for the AAU. The user must pass the context as a parameter to
+various AAU API calls.
+
+int aau_queue_buffer(u32 aau_context, aau_head_t *listhead);
+This function starts the AAU operation. The user must create an SGL
+header with an SGL attached. The format is presented below. The SGL is
+built from kernel memory.
+
+/* hardware descriptor */
+typedef struct _aau_desc
+{
+ u32 NDA; /* next descriptor address [READONLY] */
+ u32 SAR[AAU_SAR_GROUP]; /* src addrs */
+ u32 DAR; /* destination addr */
+ u32 BC; /* byte count */
+ u32 DC; /* descriptor control */
+ u32 SARE[AAU_SAR_GROUP]; /* extended src addrs */
+} aau_desc_t;
+
+/* user SGL format */
+typedef struct _aau_sgl
+{
+ aau_desc_t aau_desc; /* AAU HW Desc */
+ u32 status; /* status of SGL [READONLY] */
+ struct _aau_sgl *next; /* pointer to next SG [READONLY] */
+ void *dest; /* destination addr */
+ void *src[AAU_SAR_GROUP]; /* source addr[4] */
+ void *ext_src[AAU_SAR_GROUP]; /* ext src addr[4] */
+ u32 total_src; /* total number of source */
+} aau_sgl_t;
+
+/* header for user SGL */
+typedef struct _aau_head
+{
+ u32 total; /* total descriptors allocated */
+ u32 status; /* SGL status */
+ aau_sgl_t *list; /* ptr to head of list */
+ aau_callback_t callback; /* callback func ptr */
+} aau_head_t;
+
+
+The function will call aau_start() and start the AAU after it queues
+the SGL to the processing queue. The function will then either:
+a. Sleep on the wait queue aau->wait_q if no callback has been provided, or
+b. Continue, and then call the provided callback function when the AAU
+   interrupt has been triggered.
+
+int aau_suspend(u32 aau_context);
+Stops/Suspends the AAU operation
+
+int aau_free(u32 aau_context);
+Frees ownership of the AAU. Called when the AAU service is no longer needed.
+
+aau_sgl_t * aau_get_buffer(u32 aau_context, int num_buf);
+This function obtains an AAU SGL for the user. User must specify the number
+of descriptors to be allocated in the chain that is returned.
+
+void aau_return_buffer(u32 aau_context, aau_sgl_t *list);
+This function returns all SGLs back to the API after the user is done.
+
+int aau_memcpy(void *dest, void *src, u32 size);
+This function is a shortcut for the user to do a memory copy utilizing the
+AAU, which performs better than the CPU for large block copies. It is
+similar to a typical memcpy() call.
+
+* User is responsible for the source address(es) and the destination address.
+ The source and destination should all be cached memory.
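+
+As a minimal sketch (assuming the AAU header is included and the buffer
+sizes are illustrative), aau_memcpy() can stand in for a plain memcpy()
+on large cached buffers, with a CPU fallback if the AAU copy fails:
+
+	/* hypothetical example: copy 64K using the AAU instead of the CPU */
+	void *src = kmalloc(0x10000, GFP_KERNEL);
+	void *dst = kmalloc(0x10000, GFP_KERNEL);
+
+	if (src && dst)
+	{
+		memset(src, 0xAA, 0x10000);
+		if (aau_memcpy(dst, src, 0x10000) < 0)
+			memcpy(dst, src, 0x10000); /* fall back to CPU copy */
+	}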
+
+
+
+void aau_test()
+{
+ u32 aau;
+ char dev_id[] = "AAU";
+ int size = 2;
+ int err = 0;
+ aau_head_t *head;
+ aau_sgl_t *list;
+ u32 i;
+ u32 result = 0;
+ void *src, *dest;
+
+ printk("Starting AAU test\n");
+ if((err = aau_request(&aau, dev_id))<0)
+ {
+ printk("test - AAU request failed: %d\n", err);
+ return;
+ }
+ else
+ {
+ printk("test - AAU request successful\n");
+ }
+
+ head = kmalloc(sizeof(aau_head_t), GFP_KERNEL);
+ head->total = size;
+ head->status = 0;
+ head->callback = NULL;
+
+ list = aau_get_buffer(aau, size);
+ if(!list)
+ {
+ printk("Can't get buffers\n");
+ return;
+ }
+ head->list = list;
+
+ src = kmalloc(1024, GFP_KERNEL);
+ dest = kmalloc(1024, GFP_KERNEL);
+
+ while(list)
+ {
+ list->status = 0;
+		list->aau_desc.SAR[0] = (u32)src;
+		list->aau_desc.DAR = (u32)dest;
+		list->aau_desc.BC = 1024;
+
+		/* see iop310-aau.h for more DCR commands */
+		list->aau_desc.DC = AAU_DCR_WRITE | AAU_DCR_BLKCTRL_1_DF;
+		if(!list->next)
+		{
+			/* last descriptor: also enable the interrupt */
+			list->aau_desc.DC |= AAU_DCR_IE;
+ break;
+ }
+ list = list->next;
+ }
+
+ printk("test- Queueing buffer for AAU operation\n");
+ err = aau_queue_buffer(aau, head);
+ if(err >= 0)
+ {
+ printk("AAU Queue Buffer is done...\n");
+ }
+ else
+ {
+ printk("AAU Queue Buffer failed...: %d\n", err);
+ }
+
+
+
+#if 1
+ printk("freeing the AAU\n");
+ aau_return_buffer(aau, head->list);
+ aau_free(aau);
+ kfree(src);
+ kfree(dest);
+ kfree((void *)head);
+#endif
+}
+
+All Disclaimers apply. Use this at your own discretion. Neither Intel nor I
+will be responsible if anything goes wrong. =)
+
+
+TODO
+____
+* Testing
+* Do zero-size AAU transfer/channel at init
+  so all we have to do is chaining
+
diff -Nru a/Documentation/arm/XScale/IOP3XX/dma.txt b/Documentation/arm/XScale/IOP3XX/dma.txt
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/Documentation/arm/XScale/IOP3XX/dma.txt Tue Mar 4 19:30:14 2003
@@ -0,0 +1,214 @@
+Support functions for the Intel 80310 DMA channels
+==================================================
+
+Dave Jiang
+Last updated: 09/18/2001
+
+The Intel 80310 XScale chipset provides 3 DMA channels via the 80312 I/O
+companion chip. Two of them reside on the primary PCI bus and one on the
+secondary PCI bus.
+
+Unfortunately, the DMA API provided is not compatible with the generic
+interface in the ARM tree due to how the 80312 DMACs work. Hopefully some time
+in the near future a software interface can be done to bridge the differences.
+The DMA API has been modeled after Nicolas Pitre's SA11x0 DMA API, so
+they will look somewhat similar.
+
+
+80310 DMA API
+-------------
+
+int dma_request(dmach_t channel, const char *device_id);
+
+This function will attempt to allocate the channel depending on what the
+user requests:
+
+IOP310_DMA_P0: PCI Primary 1
+IOP310_DMA_P1: PCI Primary 2
+IOP310_DMA_S0: PCI Secondary 1
+/*EOF*/
+
+Once the user allocates the DMA channel, it is owned until released. Other
+users can also use the same DMA channel, but no new resources will be
+allocated. The function will return the allocated channel number if successful.
+
+int dma_queue_buffer(dmach_t channel, dma_sghead_t *listhead);
+
+The user will construct an SGL in the following form:
+/*
+ * Scattered Gather DMA List for user
+ */
+typedef struct _dma_desc
+{
+	u32 NDAR;	/* next descriptor address [READONLY] */
+ u32 PDAR; /* PCI address */
+ u32 PUADR; /* upper PCI address */
+ u32 LADR; /* local address */
+ u32 BC; /* byte count */
+ u32 DC; /* descriptor control */
+} dma_desc_t;
+
+typedef struct _dma_sgl
+{
+ dma_desc_t dma_desc; /* DMA descriptor */
+ u32 status; /* descriptor status [READONLY] */
+ u32 data; /* user defined data */
+ struct _dma_sgl *next; /* next descriptor [READONLY] */
+} dma_sgl_t;
+
+/* dma sgl head */
+typedef struct _dma_head
+{
+ u32 total; /* total elements in SGL */
+ u32 status; /* status of sgl */
+ u32 mode; /* read or write mode */
+ dma_sgl_t *list; /* pointer to list */
+ dma_callback_t callback; /* callback function */
+} dma_head_t;
+
+
+The user shall allocate user SGL elements by calling the function
+dma_get_buffer(). This function will give the user an SGL element. The user
+is responsible for creating the SGL head, however. The user is also
+responsible for allocating the memory for DMA data. The following code segment
+shows how a DMA operation can be performed:
+
+#include
+
+void dma_test(void)
+{
+ char dev_id[] = "Primary 0";
+ dma_head_t *sgl_head = NULL;
+ dma_sgl_t *sgl = NULL;
+ int err = 0;
+ int channel = -1;
+ u32 *test_ptr = 0;
+ DECLARE_WAIT_QUEUE_HEAD(wait_q);
+
+
+ *(IOP310_ATUCR) = (IOP310_ATUCR_PRIM_OUT_ENAB |
+ IOP310_ATUCR_DIR_ADDR_ENAB);
+
+ channel = dma_request(IOP310_DMA_P0, dev_id);
+
+ sgl_head = (dma_head_t *)kmalloc(sizeof(dma_head_t), GFP_KERNEL);
+ sgl_head->callback = NULL; /* no callback created */
+ sgl_head->total = 2; /* allocating 2 DMA descriptors */
+ sgl_head->mode = (DMA_MOD_WRITE);
+ sgl_head->status = 0;
+
+ /* now we get the two descriptors */
+ sgl = dma_get_buffer(channel, 2);
+
+ /* we set the header to point to the list we allocated */
+ sgl_head->list = sgl;
+
+ /* allocate 1k of DMA data */
+ sgl->data = (u32)kmalloc(1024, GFP_KERNEL);
+
+ /* Local address is physical */
+ sgl->dma_desc.LADR = (u32)virt_to_phys(sgl->data);
+
+ /* write to arbitrary location over the PCI bus */
+ sgl->dma_desc.PDAR = 0x00600000;
+ sgl->dma_desc.PUADR = 0;
+ sgl->dma_desc.BC = 1024;
+
+ /* set write & invalidate PCI command */
+ sgl->dma_desc.DC = DMA_DCR_PCI_MWI;
+ sgl->status = 0;
+
+ /* set a pattern */
+ memset(sgl->data, 0xFF, 1024);
+
+ /* User's responsibility to keep buffers cached coherent */
+ cpu_dcache_clean(sgl->data, sgl->data + 1024);
+
+ sgl = sgl->next;
+
+ sgl->data = (u32)kmalloc(1024, GFP_KERNEL);
+ sgl->dma_desc.LADR = (u32)virt_to_phys(sgl->data);
+ sgl->dma_desc.PDAR = 0x00610000;
+ sgl->dma_desc.PUADR = 0;
+ sgl->dma_desc.BC = 1024;
+
+ /* second descriptor has interrupt flag enabled */
+ sgl->dma_desc.DC = (DMA_DCR_PCI_MWI | DMA_DCR_IE);
+
+ /* must set end of chain flag */
+ sgl->status = DMA_END_CHAIN; /* DO NOT FORGET THIS!!!! */
+
+ memset(sgl->data, 0x0f, 1024);
+ /* User's responsibility to keep buffers cached coherent */
+ cpu_dcache_clean(sgl->data, sgl->data + 1024);
+
+	/* queueing the buffer; this function will sleep since no callback */
+ err = dma_queue_buffer(channel, sgl_head);
+
+ /* now we are woken from DMA complete */
+
+ /* do data operations here */
+
+ /* free DMA data if necessary */
+
+ /* return the descriptors */
+ dma_return_buffer(channel, sgl_head->list);
+
+ /* free the DMA */
+ dma_free(channel);
+
+ kfree((void *)sgl_head);
+}
+
+
+dma_sgl_t * dma_get_buffer(dmach_t channel, int buf_num);
+
+This call allocates DMA descriptors for the user.
+
+
+void dma_return_buffer(dmach_t channel, dma_sgl_t *list);
+
+This call returns the allocated descriptors back to the API.
+
+
+int dma_suspend(dmach_t channel);
+
+This call suspends any DMA transfer on the given channel.
+
+
+
+int dma_resume(dmach_t channel);
+
+This call resumes a DMA transfer which would have been stopped through
+dma_suspend().
+
+
+int dma_flush_all(dmach_t channel);
+
+This completely flushes all queued buffers and on-going DMA transfers on a
+given channel. This is called when DMA channel errors have occurred.
+
+
+void dma_free(dmach_t channel);
+
+This clears all activities on a given DMA channel and releases it for future
+requests.
+
+
+
+Buffer Allocation
+-----------------
+It is the user's responsibility to allocate, free, and keep track of the
+allocated DMA data memory. Upon calling dma_queue_buffer() the user must
+relinquish the control of the buffers to the kernel and not change the
+state of the buffers that it has passed to the kernel. The user will regain
+the control of the buffers when it has been woken up by the bottom half of
+the DMA interrupt handler. The user can allocate cached buffers, or non-cached
+buffers via pci_alloc_consistent(). It is the user's responsibility to ensure
+that the data is cache coherent.
+
+*Reminder*
+The user is responsible for ensuring the ATU is set up properly for DMA
+transfers.
+
+All Disclaimers apply. Use this at your own discretion. Neither Intel nor I
+will be responsible if anything goes wrong.
diff -Nru a/Documentation/arm/XScale/IOP3XX/message.txt b/Documentation/arm/XScale/IOP3XX/message.txt
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/Documentation/arm/XScale/IOP3XX/message.txt Tue Mar 4 19:30:14 2003
@@ -0,0 +1,110 @@
+Support functions for the Intel 80310 MU
+===========================================
+
+Dave Jiang
+Last updated: 10/11/2001
+
+The messaging unit of the IOP310 contains 4 components and is utilized for
+passing messages between the PCI agents on the primary bus and the Intel(R)
+80200 CPU. The four components are:
+Messaging Component
+Doorbell Component
+Circular Queues Component
+Index Registers Component
+
+Messaging Component:
+Contains 4 32-bit registers, 2 inbound and 2 outbound. Writing to a register
+asserts an interrupt on the PCI bus or to the 80200, depending on whether the
+register is inbound or outbound.
+
+int mu_msg_request(u32 *mu_context);
+Request the usage of Messaging Component. mu_context is written back by the
+API. The MU context is passed to other Messaging calls as a parameter.
+
+int mu_msg_set_callback(u32 mu_context, u8 reg, mu_msg_cb_t func);
+Setup the callback function for incoming messages. Callback can be setup for
+outbound 0, 1, or both outbound registers.
+
+int mu_msg_post(u32 mu_context, u32 val, u8 reg);
+Posts the message passed in the val parameter. The reg parameter denotes
+whether to use register 0 or 1.
+
+int mu_msg_free(u32 mu_context, u8 mode);
+Free the usage of the messaging component. mode can be specified as soft or
+hard. In hard mode all resources are deallocated.
+
+Doorbell Component:
+The doorbell component contains 1 inbound and 1 outbound register. Depending
+on the bits being set, different interrupts are asserted.
+
+int mu_db_request(u32 *mu_context);
+Request the usage of the doorbell register.
+
+int mu_db_set_callback(u32 mu_context, mu_db_cb_t func);
+Setting up the inbound callback.
+
+void mu_db_ring(u32 mu_context, u32 mask);
+Write to the outbound db register with mask.
+
+int mu_db_free(u32 mu_context);
+Free the usage of doorbell component.
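+
+A minimal sketch of the doorbell calls above (the mask value and the
+my_db_handler callback are illustrative, not part of the API):
+
+	u32 mu;
+
+	if (mu_db_request(&mu) < 0)
+		return;
+
+	mu_db_set_callback(mu, my_db_handler);	/* handle inbound doorbells */
+	mu_db_ring(mu, 0x1);			/* ring outbound doorbell bit 0 */
+	mu_db_free(mu);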
+
+Circular Queues Component:
+The circular queue component has 4 circular queues: inbound post, inbound
+free, outbound post, and outbound free. These queues are used to pass messages.
+
+int mu_cq_request(u32 *mu_context, u32 q_size);
+Request the usage of the queues. See the code comment header for q_size, which
+tells the API how large the queues should be.
+
+int mu_cq_inbound_init(u32 mu_context, mfa_list_t *list, u32 size,
+ mu_cq_cb_t func);
+Init inbound queues. The user must provide a list of free message frames to
+be put in the inbound free queue and the callback function to handle the
+inbound messages.
+
+int mu_cq_enable(u32 mu_context);
+Enables the circular queues mechanism. Called once all the setup functions
+are called.
+
+u32 mu_cq_get_frame(u32 mu_context);
+Obtain the address of an outbound free frame for the user.
+
+int mu_cq_post_frame(u32 mu_context, u32 mfa);
+Once the user has obtained a frame and filled it with information, this
+function posts the frame.
+
+int mu_cq_free(u32 mu_context);
+Free the usage of circular queues mechanism.
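+
+A minimal sketch of the circular queue calls above (q_size, free_list,
+list_size and my_inbound_cb are illustrative; see the I2O driver for a
+real user of this mechanism):
+
+	u32 mu;
+	u32 mfa;
+
+	/* claim the MU circular queues */
+	if (mu_cq_request(&mu, q_size) < 0)
+		return;
+
+	/* hand the API free inbound frames and an inbound message handler */
+	mu_cq_inbound_init(mu, free_list, list_size, my_inbound_cb);
+	mu_cq_enable(mu);
+
+	/* outbound: grab a free frame, fill it in, then post it */
+	mfa = mu_cq_get_frame(mu);
+	/* ... write the message into the frame at mfa ... */
+	mu_cq_post_frame(mu, mfa);
+
+	mu_cq_free(mu);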
+
+Index Registers Component:
+The index register provides the mechanism to receive inbound messages.
+
+int mu_ir_request(u32 *mu_context);
+Request of Index Register component usage.
+
+int mu_ir_set_callback(u32 mu_context, mu_ir_cb_t callback);
+Setting up callback for inbound messages. The callback will receive the
+value of the register that IAR offsets to.
+
+int mu_ir_free(u32 mu_context);
+Free the usage of Index Registers component.
+
+void mu_set_irq_threshold(u32 mu_context, int thresh);
+Sets up the IRQ threshold before relinquishing processing in IRQ space. The
+default is set at 10 loops.
+
+
+*NOTE: An example of a host driver that utilizes the MU can be found in the
+Linux I2O driver, specifically i2o_pci and some functions of i2o_core. The I2O
+driver only utilizes the circular queues mechanism. The other 3 components are
+simple enough that they can be easily set up. The MU API provides no flow
+control for the messaging mechanism. Flow control of the messaging needs to be
+established by a higher layer of software on the IOP or by the host driver.
+
+All Disclaimers apply. Use this at your own discretion. Neither Intel nor I
+will be responsible if anything goes wrong. =)
+
+
+TODO
+____
+
diff -Nru a/Documentation/arm/XScale/IOP3XX/pmon.txt b/Documentation/arm/XScale/IOP3XX/pmon.txt
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/Documentation/arm/XScale/IOP3XX/pmon.txt Tue Mar 4 19:30:14 2003
@@ -0,0 +1,71 @@
+
+Intel's XScale Microarchitecture 80312 companion processor provides a
+Performance Monitoring Unit (PMON) that can be utilized to provide
+information that can be useful for fine tuning of code. This text
+file describes the API that's been developed for use by Linux kernel
+programmers. Note that to get the most usage out of the PMON,
+I highly recommend getting the XScale reference manual from Intel[1]
+and looking at chapter 12.
+
+To use the PMON, you must #include in your
+source file.
+
+Since there's only one PMON, only one user can currently use the PMON
+at a given time. To claim the PMON for usage, call iop310_pmon_claim() which
+returns an identifier. When you are done using the PMON, call
+iop310_pmon_release() with the id you were given earlier.
+
+The PMON consists of 14 registers that can be used for performance measurements.
+By combining different statistics, you can derive complex performance metrics.
+
+To start the PMON, just call iop310_pmon_start(mode). The mode tells the PMON
+what statistics to capture and can be one of:
+
+ IOP310_PMU_MODE0
+ Performance Monitoring Disabled
+
+ IOP310_PMU_MODE1
+	Primary PCI bus and internal agents (bridge, DMA Ch0, DMA Ch1, PATU)
+
+ IOP310_PMU_MODE2
+	Secondary PCI bus and internal agents (bridge, DMA Ch0, DMA Ch1, PATU)
+
+ IOP310_PMU_MODE3
+ Secondary PCI bus and internal agents (external masters 0..2 and Intel
+ 80312 I/O companion chip)
+
+ IOP310_PMU_MODE4
+ Secondary PCI bus and internal agents (external masters 3..5 and Intel
+ 80312 I/O companion chip)
+
+ IOP310_PMU_MODE5
+ Intel 80312 I/O companion chip internal bus, DMA Channels and Application
+ Accelerator
+
+ IOP310_PMU_MODE6
+ Intel 80312 I/O companion chip internal bus, PATU, SATU and Intel 80200
+ processor
+
+ IOP310_PMU_MODE7
+ Intel 80312 I/O companion chip internal bus, Primary PCI bus, Secondary
+ PCI bus and Secondary PCI agents (external masters 0..5 & Intel 80312 I/O
+ companion chip)
+
+To get the results back, call iop310_pmon_stop(&results) where results is
+defined as follows:
+
+typedef struct _iop310_pmon_result
+{
+ u32 timestamp; /* Global Time Stamp Register */
+ u32 timestamp_overflow; /* Time Stamp overflow count */
+ u32 event_count[14]; /* Programmable Event Counter
+ Registers 1-14 */
+ u32 event_overflow[14]; /* Overflow counter for PECR1-14 */
+} iop310_pmon_res_t;
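+
+A minimal sketch of a measurement run using the calls above (the choice of
+mode and the code being measured are illustrative):
+
+	int id;
+	iop310_pmon_res_t results;
+
+	id = iop310_pmon_claim();	/* only one user at a time */
+	iop310_pmon_start(IOP310_PMU_MODE1);
+
+	/* ... run the code being measured ... */
+
+	iop310_pmon_stop(&results);
+	iop310_pmon_release(id);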
+
+
+--
+This code is still under development, so please feel free to send patches,
+questions, comments, etc to me.
+
+Deepak Saxena
diff -Nru a/Documentation/arm/XScale/cache-lock.txt b/Documentation/arm/XScale/cache-lock.txt
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/Documentation/arm/XScale/cache-lock.txt Tue Mar 4 19:30:14 2003
@@ -0,0 +1,123 @@
+
+Intel's XScale Microarchitecture provides support for locking of data
+and instructions into the appropriate caches. This file provides
+an overview of the API that has been developed to take advantage of this
+feature from kernel space. Note that there is NO support for user space
+cache locking.
+
+For example usage of this code, grab:
+
+ ftp://source.mvista.com/pub/xscale/cache-test.c
+
+If you have any questions, comments, patches, etc, please contact me.
+
+Deepak Saxena
+
+API DESCRIPTION
+
+
+I. Header File
+
+ #include
+
+II. Cache Capability Discovery
+
+ SYNOPSIS
+
+ int cache_query(u8 cache_type,
+ struct cache_capabilities *pcache);
+
+ struct cache_capabilities
+ {
+ u32 flags; /* Flags defining capabilities */
+ u32 cache_size; /* Cache size in K (1024 bytes) */
+ u32 max_lock; /* Maximum lockable region in K */
+ }
+
+ /*
+ * Flags
+ */
+
+ /*
+ * Bit 0: Cache lockability
+ * Bits 1-31: Reserved for future use
+ */
+ #define CACHE_LOCKABLE 0x00000001 /* Cache can be locked */
+
+ /*
+ * Cache Types
+ */
+ #define ICACHE 0x00
+ #define DCACHE 0x01
+
+ DESCRIPTION
+
+ This function fills out the pcache capability identifier for the
+ requested cache. cache_type is either DCACHE or ICACHE. This
+	function is not very useful at the moment as all XScale CPUs
+	have the same size cache, but it is provided for future XScale
+	based processors that may have larger cache sizes.
+
+ RETURN VALUE
+
+ This function returns 0 if no error occurs, otherwise it returns
+ a negative, errno compatible value.
+
+ -EIO Unknown hardware error
+
+III. Cache Locking
+
+ SYNOPSIS
+
+ int cache_lock(void *addr, u32 len, u8 cache_type, const char *desc);
+
+ DESCRIPTION
+
+	This function locks a physically contiguous portion of memory starting
+ at the virtual address pointed to by addr into the cache referenced
+ by cache_type.
+
+ The address of the data/instruction that is to be locked must be
+	aligned on a cache line boundary (L1_CACHE_ALIGNMENT).
+
+ The desc parameter is an optional (pass NULL if not used) human readable
+ descriptor of the locked memory region that is used by the cache
+ management code to build the /proc/cache_locks table.
+
+ Note that this function does not check whether the address is valid
+ or not before locking it into the cache. That duty is up to the
+	caller. Also, it does not check for duplicate or overlapping
+ entries.
+
+ RETURN VALUE
+
+ If the function is successful in locking the entry into cache, a
+ zero is returned.
+
+ If an error occurs, an appropriate error value is returned.
+
+ -EINVAL The memory address provided was not cache line aligned
+ -ENOMEM Could not allocate memory to complete operation
+ -ENOSPC Not enough space left on cache to lock in requested region
+ -EIO Unknown error
+
+IV. Cache Unlocking
+
+ SYNOPSIS
+
+ int cache_unlock(void *addr)
+
+ DESCRIPTION
+
+ This function unlocks a portion of memory that was previously locked
+ into either the I or D cache.
+
+ RETURN VALUE
+
+ If the entry is cleanly unlocked from the cache, a 0 is returned.
+ In the case of an error, an appropriate error is returned.
+
+ -ENOENT No entry with given address associated with this cache
+ -EIO Unknown error
+
+
diff -Nru a/Documentation/arm/XScale/pmu.txt b/Documentation/arm/XScale/pmu.txt
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/Documentation/arm/XScale/pmu.txt Tue Mar 4 19:30:14 2003
@@ -0,0 +1,168 @@
+
+Intel's XScale Microarchitecture processors provide a Performance
+Monitoring Unit (PMU) that can be utilized to provide information
+that can be useful for fine tuning of code. This text file describes
+the API that's been developed for use by Linux kernel programmers.
+When I have some extra time on my hands, I will extend the code to
+provide support for user mode performance monitoring (which is
+probably much more useful). Note that to get the most usage out
+of the PMU, I highly recommend getting the XScale reference manual
+from Intel and looking at chapter 12.
+
+To use the PMU, you must #include in your source file.
+
+Since there's only one PMU, only one user can currently use the PMU
+at a given time. To claim the PMU for usage, call pmu_claim() which
+returns an identifier. When you are done using the PMU, call
+pmu_release() with the identifier that you were given by pmu_claim.
+
+In addition, the PMU can only be used on XScale based systems that
+provide an external timer. Systems that the PMU is currently supported
+on are:
+
+ - Cyclone IQ80310
+
+Before delving into how to use the PMU code, let's do a quick overview
+of the PMU itself. The PMU consists of three registers that can be
+used for performance measurements. The first is the CCNT register which
+provides the number of clock cycles elapsed since the PMU was started.
+The next two registers, PMN0 and PMN1, are each user programmable to
+provide 1 of 20 different performance statistics. By combining different
+statistics, you can derive complex performance metrics.
+
+To start the PMU, just call pmu_start(pmn0, pmn1). pmn0 and pmn1 tell
+the PMU what statistics to capture and can each be one of:
+
+EVT_ICACHE_MISS
+ Instruction fetches requiring access to external memory
+
+EVT_ICACHE_NO_DELIVER
+ Instruction cache could not deliver an instruction. Either an
+ ICACHE miss or an instruction TLB miss.
+
+EVT_ICACHE_DATA_STALL
+ Stall in execution due to a data dependency. This counter is
+ incremented each cycle in which the condition is present.
+
+EVT_ITLB_MISS
+ Instruction TLB miss
+
+EVT_DTLB_MISS
+ Data TLB miss
+
+EVT_BRANCH
+ A branch instruction was executed and it may or may not have
+ changed program flow
+
+EVT_BRANCH_MISS
+ A branch (B or BL instructions only) was mispredicted
+
+EVT_INSTRUCTION
+ An instruction was executed
+
+EVT_DCACHE_FULL_STALL
+ Stall because data cache buffers are full. Incremented on every
+ cycle in which condition is present.
+
+EVT_DCACHE_FULL_STALL_CONTIG
+ Stall because data cache buffers are full. Incremented on every
+	cycle in which condition is contiguous.
+
+EVT_DCACHE_ACCESS
+ Data cache access (data fetch)
+
+EVT_DCACHE_MISS
+ Data cache miss
+
+EVT_DCACHE_WRITE_BACK
+ Data cache write back. This counter is incremented for every
+ 1/2 line (four words) that are written back.
+
+EVT_PC_CHANGED
+ Software changed the PC. This is incremented only when the
+ software changes the PC and there is no mode change. For example,
+ a MOV instruction that targets the PC would increment the counter.
+ An SWI would not as it triggers a mode change.
+
+EVT_BCU_REQUEST
+	The Bus Control Unit (BCU) received a request from the core
+
+EVT_BCU_FULL
+	The BCU request queue is full. A high value for this event means
+	that the BCU is often waiting for transactions to complete on the
+	external bus.
+
+EVT_BCU_DRAIN
+ The BCU queues were drained due to either a Drain Write Buffer
+ command or an I/O transaction for a page that was marked as
+ uncacheable and unbufferable.
+
+EVT_BCU_ECC_NO_ELOG
+	The BCU detected an ECC error on the memory bus but no ELOG
+	register was available to log the errors.
+
+EVT_BCU_1_BIT_ERR
+ The BCU detected a 1-bit error while reading from the bus.
+
+EVT_RMW
+ An RMW cycle occurred due to narrow write on ECC protected memory.
+
+To get the results back, call pmu_stop(&results) where results is defined
+as a struct pmu_results:
+
+ struct pmu_results
+ {
+ u32 ccnt; /* Clock Counter Register */
+		u32 ccnt_of;	/* CCNT overflow count */
+		u32 pmn0;	/* Performance Counter Register 0 */
+		u32 pmn0_of;	/* PMN0 overflow count */
+		u32 pmn1;	/* Performance Counter Register 1 */
+		u32 pmn1_of;	/* PMN1 overflow count */
+ };
+
+Pretty simple huh? Following are some examples of how to get some commonly
+wanted numbers out of the PMU data. Note that since you will be dividing
+things, this isn't super useful from the kernel and you need to printk the
+data out to syslog. See [1] for more examples.
+
+Instruction Cache Efficiency
+
+ pmu_start(EVT_INSTRUCTION, EVT_ICACHE_MISS);
+ ...
+ pmu_stop(&results);
+
+	icache_miss_rate = results.pmn1 / results.pmn0;
+ cycles_per_instruction = results.ccnt / results.pmn0;
+
+Data Cache Efficiency
+
+ pmu_start(EVT_DCACHE_ACCESS, EVT_DCACHE_MISS);
+ ...
+ pmu_stop(&results);
+
+	dcache_miss_rate = results.pmn1 / results.pmn0;
+
+Instruction Fetch Latency
+
+ pmu_start(EVT_ICACHE_NO_DELIVER, EVT_ICACHE_MISS);
+ ...
+ pmu_stop(&results);
+
+ average_stall_waiting_for_instruction_fetch =
+ results.pmn0 / results.pmn1;
+
+ percent_stall_cycles_due_to_instruction_fetch =
+ results.pmn0 / results.ccnt;
+
+
+ToDo:
+
+- Add support for usermode PMU usage. This might require hooking into
+ the scheduler so that we pause the PMU when the task that requested
+ statistics is scheduled out.
+
+--
+This code is still under development, so please feel free to send patches,
+questions, comments, etc to me.
+
+Deepak Saxena
+
diff -Nru a/Documentation/arm/XScale/tlb-lock.txt b/Documentation/arm/XScale/tlb-lock.txt
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/Documentation/arm/XScale/tlb-lock.txt Tue Mar 4 19:30:14 2003
@@ -0,0 +1,64 @@
+
+Intel's XScale Microarchitecture provides support for locking of TLB
+entries in both the instruction and data TLBs. This file provides
+an overview of the API that has been developed to take advantage of this
+feature from kernel space. Note that there is NO support for user space.
+
+In general, this feature should be used in conjunction with locking
+data or instructions into the appropriate caches. See the file
+cache-lock.txt in this directory.
+
+If you have any questions, comments, patches, etc, please contact me.
+
+Deepak Saxena
+
+
+API DESCRIPTION
+
+I. Header file
+
+ #include
+
+II. Locking an entry into the TLB
+
+ SYNOPSIS
+
+	int xscale_tlb_lock(u8 tlb_type, u32 addr);
+
+ /*
+ * TLB types
+ */
+ #define ITLB 0x0
+ #define DTLB 0x1
+
+ DESCRIPTION
+
+ This function locks the virtual to physical mapping for virtual
+ address addr into the requested TLB.
+
+ RETURN VALUE
+
+ If the entry is properly locked into the TLB, a 0 is returned.
+ In case of an error, an appropriate error is returned.
+
+ -ENOSPC No more entries left in the TLB
+ -EIO Unknown error
+
+III. Unlocking an entry from a TLB
+
+ SYNOPSIS
+
+	int xscale_tlb_unlock(u8 tlb_type, u32 addr);
+
+ DESCRIPTION
+
+ This function unlocks the entry for virtual address addr from the
+	specified TLB.
+
+ RETURN VALUE
+
+ If the TLB entry is properly unlocked, a 0 is returned.
+ In case of an error, an appropriate error is returned.
+
+ -ENOENT No entry for given address in specified TLB
+
diff -Nru a/Documentation/block/biodoc.txt b/Documentation/block/biodoc.txt
--- a/Documentation/block/biodoc.txt Tue Mar 4 19:30:12 2003
+++ b/Documentation/block/biodoc.txt Tue Mar 4 19:30:12 2003
@@ -1038,7 +1038,7 @@
in fact all queues get unplugged as a side-effect.
Aside:
- This is kind of controversial territory, as its not clear if plugging is
+ This is kind of controversial territory, as it's not clear if plugging is
always the right thing to do. Devices typically have their own queues,
and allowing a big queue to build up in software, while letting the device be
idle for a while may not always make sense. The trick is to handle the fine
diff -Nru a/Documentation/cpu-freq/core.txt b/Documentation/cpu-freq/core.txt
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/Documentation/cpu-freq/core.txt Tue Mar 4 19:30:12 2003
@@ -0,0 +1,90 @@
+ CPU frequency and voltage scaling code in the Linux(TM) kernel
+
+
+ L i n u x C P U F r e q
+
+ C P U F r e q C o r e
+
+
+ Dominik Brodowski
+ David Kimdon
+
+
+
+ Clock scaling allows you to change the clock speed of the CPUs on the
+ fly. This is a nice method to save battery power, because the lower
+ the clock speed, the less power the CPU consumes.
+
+
+Contents:
+---------
+1. CPUFreq core and interfaces
+2. CPUFreq notifiers
+
+1. General Information
+=======================
+
+The CPUFreq core code is located in linux/kernel/cpufreq.c. This
+cpufreq code offers a standardized interface for the CPUFreq
+architecture drivers (those pieces of code that do actual
+frequency transitions), as well as to "notifiers". These are device
+drivers or other parts of the kernel that need to be informed of
+policy changes (ex. thermal modules like ACPI) or of all
+frequency changes (ex. timing code) or even need to force certain
+speed limits (like LCD drivers on ARM architecture). Additionally, the
+kernel "constant" loops_per_jiffy is updated on frequency changes
+here.
+
+
+2. CPUFreq notifiers
+====================
+
+CPUFreq notifiers conform to the standard kernel notifier interface.
+See linux/include/linux/notifier.h for details on notifiers.
+
+There are two different CPUFreq notifiers - policy notifiers and
+transition notifiers.
+
+
+2.1 CPUFreq policy notifiers
+----------------------------
+
+These are notified when a new policy is intended to be set. Each
+CPUFreq policy notifier is called three times for a policy transition:
+
+1.) During CPUFREQ_ADJUST all CPUFreq notifiers may change the limit if
+    they see a need for this - be it thermal considerations or
+ hardware limitations.
+
+2.) During CPUFREQ_INCOMPATIBLE, changes may only be done in order to avoid
+ hardware failure.
+
+3.) And during CPUFREQ_NOTIFY all notifiers are informed of the new policy
+ - if two hardware drivers failed to agree on a new policy before this
+ stage, the incompatible hardware shall be shut down, and the user
+ informed of this.
+
+The phase is specified in the second argument to the notifier.
+
+The third argument, a void *pointer, points to a struct cpufreq_policy
+consisting of five values: cpu, min, max, policy and max_cpu_freq. min
+and max are the lower and upper frequencies (in kHz) of the new
+policy, policy the new policy, cpu the number of the affected CPU or
+CPUFREQ_ALL_CPUS for all CPUs; and max_cpu_freq the maximum supported
+CPU frequency. This value is given for informational purposes only.
+
+
+2.2 CPUFreq transition notifiers
+--------------------------------
+
+These are notified twice when the CPUfreq driver switches the CPU core
+frequency and this change has any external implications.
+
+The second argument specifies the phase - CPUFREQ_PRECHANGE or
+CPUFREQ_POSTCHANGE.
+
+The third argument is a struct cpufreq_freqs with the following
+values:
+cpu - number of the affected CPU or CPUFREQ_ALL_CPUS
+old - old frequency
+new - new frequency
diff -Nru a/Documentation/cpu-freq/cpu-drivers.txt b/Documentation/cpu-freq/cpu-drivers.txt
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/Documentation/cpu-freq/cpu-drivers.txt Tue Mar 4 19:30:14 2003
@@ -0,0 +1,207 @@
+ CPU frequency and voltage scaling code in the Linux(TM) kernel
+
+
+ L i n u x C P U F r e q
+
+ C P U D r i v e r s
+
+ - information for developers -
+
+
+ Dominik Brodowski
+
+
+
+ Clock scaling allows you to change the clock speed of the CPUs on the
+ fly. This is a nice method to save battery power, because the lower
+ the clock speed, the less power the CPU consumes.
+
+
+Contents:
+---------
+1. What To Do?
+1.1 Initialization
+1.2 Per-CPU Initialization
+1.3 verify
+1.4 target or setpolicy?
+1.5 target
+1.6 setpolicy
+2. Frequency Table Helpers
+
+
+
+1. What To Do?
+==============
+
+So, you just got a brand-new CPU / chipset with datasheets and want to
+add cpufreq support for this CPU / chipset? Great. Here are some hints
+on what is necessary:
+
+
+1.1 Initialization
+------------------
+
+First of all, in an __initcall level 7 or later (preferably
+module_init() so that your driver is modularized) function check
+whether this kernel runs on the right CPU and the right chipset. If
+so, register a struct cpufreq_driver with the CPUfreq core using
+cpufreq_register_driver()
+
+What shall this struct cpufreq_driver contain?
+
+cpufreq_driver.name - The name of this driver.
+
+cpufreq_driver.init - A pointer to the per-CPU initialization
+ function.
+
+cpufreq_driver.verify -		A pointer to a "verification" function.
+
+cpufreq_driver.setpolicy _or_
+cpufreq_driver.target - See below on the differences.
+
+And optionally
+
+cpufreq_driver.exit - A pointer to a per-CPU cleanup function.
+
+
+
+1.2 Per-CPU Initialization
+--------------------------
+
+Whenever a new CPU is registered with the device model, or after the
+cpufreq driver registers itself, the per-CPU initialization function
+cpufreq_driver.init is called. It takes a struct cpufreq_policy
+*policy as argument. What to do now?
+
+If necessary, activate the CPUfreq support on your CPU (unlock that
+register etc.).
+
+Then, the driver must fill in the following values:
+
+policy->cpuinfo.min_freq _and_
+policy->cpuinfo.max_freq - the minimum and maximum frequency
+ (in kHz) which is supported by
+ this CPU
+policy->cpuinfo.transition_latency the time it takes on this CPU to
+ switch between two frequencies (if
+ appropriate, else specify
+ CPUFREQ_ETERNAL)
+
+policy->cur The current operating frequency of
+ this CPU (if appropriate)
+policy->min,
+policy->max,
+policy->policy and, if necessary,
+policy->governor must contain the "default policy" for
+ this CPU. A few moments later,
+ cpufreq_driver.verify and either
+ cpufreq_driver.setpolicy or
+ cpufreq_driver.target is called with
+ these values.
+
+For setting some of these values, the frequency table helpers might be
+helpful. See section 2 for more information on them.
+
+
+1.3 verify
+------------
+
+When the user decides a new policy (consisting of
+"policy,governor,min,max") shall be set, this policy must be validated
+so that incompatible values can be corrected. For verifying these
+values, a frequency table helper and/or the
+cpufreq_verify_within_limits(struct cpufreq_policy *policy, unsigned
+int min_freq, unsigned int max_freq) function might be helpful. See
+section 2 for details on frequency table helpers.
+
+You need to make sure that at least one valid frequency (or operating
+range) is within policy->min and policy->max. If necessary, increase
+policy->max first, and only if this is no solution, decrease policy->min.
+
+
+1.4 target or setpolicy?
+----------------------------
+
+Most cpufreq drivers or even most cpu frequency scaling algorithms
+only allow the CPU to be set to one frequency. For these, you use the
+->target call.
+
+Some cpufreq-capable processors switch the frequency between certain
+limits on their own. These shall use the ->setpolicy call.
+
+
+1.5 target
+-------------
+
+The target call has three arguments: struct cpufreq_policy *policy,
+unsigned int target_frequency, unsigned int relation.
+
+The CPUfreq driver must set the new frequency when called here. The
+actual frequency must be determined using the following rules:
+
+- keep close to "target_freq"
+- policy->min <= new_freq <= policy->max (THIS MUST BE VALID!!!)
+- if relation==CPUFREQ_REL_L, try to select a new_freq higher than or equal
+  to target_freq. ("L for lowest, but no lower than")
+- if relation==CPUFREQ_REL_H, try to select a new_freq lower than or equal
+  to target_freq. ("H for highest, but no higher than")
+
+Here again the frequency table helper might assist you - see section 2
+for details.
+
+
+1.6 setpolicy
+---------------
+
+The setpolicy call only takes a struct cpufreq_policy *policy as
+argument. You need to set the lower limit of the in-processor or
+in-chipset dynamic frequency switching to policy->min, the upper limit
+to policy->max, and -if supported- select a performance-oriented
+setting when policy->policy is CPUFREQ_POLICY_PERFORMANCE, and a
+powersaving-oriented setting when CPUFREQ_POLICY_POWERSAVE. Also check
+the reference implementation in arch/i386/kernel/cpu/cpufreq/longrun.c.
+
+
+
+2. Frequency Table Helpers
+==========================
+
+As most cpufreq processors can only be set to a few specific
+frequencies, a "frequency table" with some functions might assist in
+some work of the processor driver. Such a "frequency table" consists
+of an array of struct cpufreq_freq_table entries, with any value in
+"index" you want to use, and the corresponding frequency in
+"frequency". At the end of the table, you need to add a
+cpufreq_freq_table entry with frequency set to CPUFREQ_TABLE_END. And
+if you want to skip one entry in the table, set the frequency to
+CPUFREQ_ENTRY_INVALID. The entries don't need to be in ascending
+order.
+
+By calling cpufreq_frequency_table_cpuinfo(struct cpufreq_policy *policy,
+ struct cpufreq_frequency_table *table);
+the cpuinfo.min_freq and cpuinfo.max_freq values are detected, and
+policy->min and policy->max are set to the same values. This is
+helpful for the per-CPU initialization stage.
+
+int cpufreq_frequency_table_verify(struct cpufreq_policy *policy,
+ struct cpufreq_frequency_table *table);
+assures that at least one valid frequency is within policy->min and
+policy->max, and all other criteria are met. This is helpful for the
+->verify call.
+
+int cpufreq_frequency_table_target(struct cpufreq_policy *policy,
+ struct cpufreq_frequency_table *table,
+ unsigned int target_freq,
+ unsigned int relation,
+ unsigned int *index);
+
+is the corresponding frequency table helper for the ->target
+stage. Just pass the values to this function, and the unsigned int
+pointed to by index is set to the number of the frequency table entry
+which contains the frequency the CPU shall be set to. PLEASE NOTE:
+This is not the "index" stored in cpufreq_table_entry.index, but
+instead the array position, i.e. cpufreq_table[index]. So, the new
+frequency is cpufreq_table[index].frequency, and the value you stored
+into the frequency table "index" field is
+cpufreq_table[index].index.
+
diff -Nru a/Documentation/cpu-freq/governors.txt b/Documentation/cpu-freq/governors.txt
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/Documentation/cpu-freq/governors.txt Tue Mar 4 19:30:14 2003
@@ -0,0 +1,155 @@
+ CPU frequency and voltage scaling code in the Linux(TM) kernel
+
+
+ L i n u x C P U F r e q
+
+ C P U F r e q G o v e r n o r s
+
+ - information for users and developers -
+
+
+ Dominik Brodowski
+
+
+
+ Clock scaling allows you to change the clock speed of the CPUs on the
+ fly. This is a nice method to save battery power, because the lower
+ the clock speed, the less power the CPU consumes.
+
+
+Contents:
+---------
+1. What is a CPUFreq Governor?
+
+2. Governors In the Linux Kernel
+2.1 Performance
+2.2 Powersave
+2.3 Userspace
+
+3. The Governor Interface in the CPUfreq Core
+
+
+
+1. What Is A CPUFreq Governor?
+==============================
+
+Most cpufreq drivers (in fact, all except one, longrun) or even most
+cpu frequency scaling algorithms only allow the CPU to be set to one
+frequency. In order to offer dynamic frequency scaling, the cpufreq
+core must be able to tell these drivers of a "target frequency". So
+these specific drivers will be transformed to offer a "->target"
+call instead of the existing "->setpolicy" call. For "longrun", all
+stays the same, though.
+
+How to decide what frequency within the CPUfreq policy should be used?
+That's done using "cpufreq governors". Two are already in this patch
+-- they're the already existing "powersave" and "performance" which
+set the frequency statically to the lowest or highest frequency,
+respectively. At least two more such governors will be ready for
+addition in the near future, but likely many more as there are various
+different theories and models about dynamic frequency scaling
+around. Using such a generic interface as cpufreq offers to scaling
+governors, these can be tested extensively, and the best one can be
+selected for each specific use.
+
+Basically, it's the following flow graph:
+
+CPU can be set to switch independently	 |	   CPU can only be set
+ within specific "limits" | to specific frequencies
+
+ "CPUfreq policy"
+ consists of frequency limits (policy->{min,max})
+ and CPUfreq governor to be used
+ / \
+ / \
+ / the cpufreq governor decides
+ / (dynamically or statically)
+ / what target_freq to set within
+ / the limits of policy->{min,max}
+ / \
+ / \
+ Using the ->setpolicy call, Using the ->target call,
+ the limits and the the frequency closest
+ "policy" is set. to target_freq is set.
+ It is assured that it
+ is within policy->{min,max}
+
+
+2. Governors In the Linux Kernel
+================================
+
+2.1 Performance
+---------------
+
+The CPUfreq governor "performance" sets the CPU statically to the
+highest frequency within the borders of scaling_min_freq and
+scaling_max_freq.
+
+
+2.2 Powersave
+-------------
+
+The CPUfreq governor "powersave" sets the CPU statically to the
+lowest frequency within the borders of scaling_min_freq and
+scaling_max_freq.
+
+
+2.3 Userspace
+-------------
+
+The CPUfreq governor "userspace" allows the user, or any userspace
+program running with UID "root", to set the CPU to a specific frequency
+by making a sysfs file "scaling_setspeed" available in the CPU-device
+directory.
+
+
+
+3. The Governor Interface in the CPUfreq Core
+=============================================
+
+A new governor must register itself with the CPUfreq core using
+"cpufreq_register_governor". The struct cpufreq_governor, which has to
+be passed to that function, must contain the following values:
+
+governor->name - A unique name for this governor
+governor->governor - The governor callback function
+governor->owner -		THIS_MODULE for the governor module (if
+ appropriate)
+
+The governor->governor callback is called with the current (or to-be-set)
+cpufreq_policy struct for that CPU, and an unsigned int event. The
+following events are currently defined:
+
+CPUFREQ_GOV_START: This governor shall start its duty for the CPU
+ policy->cpu
+CPUFREQ_GOV_STOP: This governor shall end its duty for the CPU
+ policy->cpu
+CPUFREQ_GOV_LIMITS: The limits for CPU policy->cpu have changed to
+ policy->min and policy->max.
+
+If you need other "events" from outside of your driver, _only_ use the
+cpufreq_governor_l(unsigned int cpu, unsigned int event) call to the
+CPUfreq core to ensure proper locking.
+
+
+The CPUfreq governor may call the CPU processor driver using one of
+these two functions:
+
+inline int cpufreq_driver_target(struct cpufreq_policy *policy,
+ unsigned int target_freq,
+ unsigned int relation);
+
+inline int cpufreq_driver_target_l(struct cpufreq_policy *policy,
+ unsigned int target_freq,
+ unsigned int relation);
+
+target_freq must be within policy->min and policy->max, of course.
+What's the difference between these two functions? When your governor
+still is in a direct code path of a call to governor->governor, the
+cpufreq_driver_sem lock is still held in the cpufreq core, and there's
+no need to lock it again (in fact, this would cause a deadlock). So
+use cpufreq_driver_target only in these cases. In all other cases (for
+example, when there's a "daemonized" function that wakes up every
+second), use cpufreq_driver_target_l to lock the cpufreq_driver_sem
+before the command is passed to the cpufreq processor driver.
+
diff -Nru a/Documentation/cpu-freq/index.txt b/Documentation/cpu-freq/index.txt
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/Documentation/cpu-freq/index.txt Tue Mar 4 19:30:14 2003
@@ -0,0 +1,56 @@
+ CPU frequency and voltage scaling code in the Linux(TM) kernel
+
+
+ L i n u x C P U F r e q
+
+
+
+
+ Dominik Brodowski
+
+
+
+ Clock scaling allows you to change the clock speed of the CPUs on the
+ fly. This is a nice method to save battery power, because the lower
+ the clock speed, the less power the CPU consumes.
+
+
+
+Documents in this directory:
+----------------------------
+core.txt - General description of the CPUFreq core and
+ of CPUFreq notifiers
+
+cpu-drivers.txt - How to implement a new cpufreq processor driver
+
+governors.txt - What are cpufreq governors and how to
+ implement them?
+
+index.txt - File index, Mailing list and Links (this document)
+
+user-guide.txt - User Guide to CPUFreq
+
+
+Mailing List
+------------
+There is a CPU frequency scaling CVS-commit and general mailing list where
+you can report bugs, problems or submit patches. To post a message,
+send an email to cpufreq@www.linux.org.uk, to subscribe go to
+http://www.linux.org.uk/mailman/listinfo/cpufreq. Previous posts to the
+mailing list are available to subscribers at
+http://www.linux.org.uk/mailman/private/cpufreq/.
+
+
+Links
+-----
+the FTP archives:
+* ftp://ftp.linux.org.uk/pub/linux/cpufreq/
+
+how to access the CVS repository:
+* http://cvs.arm.linux.org.uk/
+
+the CPUFreq Mailing list:
+* http://www.linux.org.uk/mailman/listinfo/cpufreq
+
+Clock and voltage scaling for the SA-1100:
+* http://www.lart.tudelft.nl/projects/scaling
diff -Nru a/Documentation/cpu-freq/user-guide.txt b/Documentation/cpu-freq/user-guide.txt
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/Documentation/cpu-freq/user-guide.txt Tue Mar 4 19:30:14 2003
@@ -0,0 +1,166 @@
+ CPU frequency and voltage scaling code in the Linux(TM) kernel
+
+
+ L i n u x C P U F r e q
+
+ U S E R G U I D E
+
+
+ Dominik Brodowski
+
+
+
+ Clock scaling allows you to change the clock speed of the CPUs on the
+ fly. This is a nice method to save battery power, because the lower
+ the clock speed, the less power the CPU consumes.
+
+
+Contents:
+---------
+1. Supported Architectures and Processors
+1.1 ARM
+1.2 x86
+1.3 sparc64
+
+2. "Policy" / "Governor"?
+2.1 Policy
+2.2 Governor
+
+3. How to change the CPU cpufreq policy and/or speed
+3.1 Preferred interface: sysfs
+3.2 Deprecated interfaces
+
+
+
+1. Supported Architectures and Processors
+=========================================
+
+1.1 ARM
+-------
+
+The following ARM processors are supported by cpufreq:
+
+ARM Integrator
+ARM-SA1100
+ARM-SA1110
+
+
+1.2 x86
+-------
+
+The following processors for the x86 architecture are supported by cpufreq:
+
+AMD Elan - SC400, SC410
+AMD mobile K6-2+
+AMD mobile K6-3+
+Cyrix Media GXm
+Intel mobile PIII [*] and Intel mobile PIII-M on certain chipsets
+Intel Pentium 4, Intel Xeon
+National Semiconductors Geode GX
+Transmeta Crusoe
+various processors on some ACPI 2.0-compatible systems [**]
+
+[*] only certain Intel mobile PIII processors are supported. If you
+know that you own a speedstep-capable processor, pass the option
+"speedstep_coppermine=1" to the module speedstep.o
+
+[**] Only if "ACPI Processor Performance States" are available
+to the ACPI<->BIOS interface.
+
+
+1.3 sparc64
+-----------
+
+The following processors for the sparc64 architecture are supported by
+cpufreq:
+
+UltraSPARC-III
+
+
+
+2. "Policy" / "Governor" ?
+==========================
+
+Some CPU frequency scaling-capable processors switch between various
+frequencies and operating voltages "on the fly" without any kernel or
+user involvement. This guarantees very fast switching to a frequency
+which is high enough to serve the user's needs, but low enough to save
+power.
+
+
+2.1 Policy
+----------
+
+On these systems, all you can do is select the lower and upper
+frequency limit as well as whether you want more aggressive
+power-saving or more instantly available processing power.
+
+
+2.2 Governor
+------------
+
+On all other cpufreq implementations, these boundaries still need to
+be set. Then, a "governor" must be selected. Such a "governor" decides
+what speed the processor shall run within the boundaries. One such
+"governor" is the "userspace" governor. This one allows the user - or
+a yet-to-implement userspace program - to decide what specific speed
+the processor shall run at.
+
+
+3. How to change the CPU cpufreq policy and/or speed
+====================================================
+
+3.1 Preferred Interface: sysfs
+------------------------------
+
+The preferred interface is located in the sysfs filesystem. If you
+mounted it at /sys, the cpufreq interface is located in the
+cpu-device directory (e.g. /sys/devices/sys/cpu0/ for the first
+CPU).
+
+cpuinfo_min_freq :		this file shows the minimum operating
+				frequency the processor can run at (in kHz)
+cpuinfo_max_freq :		this file shows the maximum operating
+				frequency the processor can run at (in kHz)
+scaling_driver : this file shows what cpufreq driver is
+ used to set the frequency on this CPU
+
+available_scaling_governors :	this file shows the CPUfreq governors
+				available in this kernel
+scaling_governor :		this file shows the currently activated
+				governor. By "echoing" the name of another
+				governor into it you can change it. Please
+				note that some governors won't load - they
+				only work on some specific architectures or
+				processors.
+scaling_min_freq and
+scaling_max_freq show the current "policy limits" (in
+ kHz). By echoing new values into these
+ files, you can change these limits.
+
+
+If you have selected the "userspace" governor which allows you to
+set the CPU operating frequency to a specific value, you can read out
+the current frequency in
+
+scaling_setspeed. By "echoing" a new frequency into this
+				file you can change the speed of the CPU,
+ but only within the limits of
+ scaling_min_freq and scaling_max_freq.
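Tying the files above together, the sketch below walks through a typical "userspace" session. It runs against a scratch directory standing in for the sysfs cpu-device directory, so it can be tried anywhere; on a real system you would instead point CPUFREQ at the actual directory (e.g. /sys/devices/sys/cpu0) and run as root. The frequency values are illustrative.

```shell
# Scratch stand-in for the sysfs cpu-device directory; on a real
# system this would be e.g. /sys/devices/sys/cpu0, and the limit
# files would be provided by the kernel (values here are made up).
CPUFREQ=/tmp/cpufreq-demo
mkdir -p "$CPUFREQ"
echo 1200000 > "$CPUFREQ/cpuinfo_min_freq"
echo 1600000 > "$CPUFREQ/cpuinfo_max_freq"

# Select the "userspace" governor, then pin the CPU at its minimum
# supported frequency (values are in kHz; on a real system the value
# must lie between scaling_min_freq and scaling_max_freq).
echo userspace > "$CPUFREQ/scaling_governor"
cat "$CPUFREQ/cpuinfo_min_freq" > "$CPUFREQ/scaling_setspeed"

# Read back the active governor and the requested speed.
cat "$CPUFREQ/scaling_governor" "$CPUFREQ/scaling_setspeed"
```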
+
+
+3.2 Deprecated Interfaces
+-------------------------
+
+Depending on your kernel configuration, you might find the following
+cpufreq-related files:
+/proc/cpufreq
+/proc/sys/cpu/*/speed
+/proc/sys/cpu/*/speed-min
+/proc/sys/cpu/*/speed-max
+
+These are files for deprecated interfaces to cpufreq, which offer far
+less functionality. Because of this, these interfaces aren't described
+here.
+
diff -Nru a/Documentation/cpufreq b/Documentation/cpufreq
--- a/Documentation/cpufreq Tue Mar 4 19:30:12 2003
+++ /dev/null Wed Dec 31 16:00:00 1969
@@ -1,364 +0,0 @@
- CPU frequency and voltage scaling code in the Linux(TM) kernel
-
-
- L i n u x C P U F r e q
-
-
-
-
- Dominik Brodowski
- David Kimdon
-
-
-
- Clock scaling allows you to change the clock speed of the CPUs on the
- fly. This is a nice method to save battery power, because the lower
- the clock speed, the less power the CPU consumes.
-
-
-
-Contents:
----------
-1. Supported architectures
-2. User interface
-2.1 /proc/cpufreq interface [2.6]
-2.2. /proc/sys/cpu/ interface [2.4]
-3. CPUFreq core and interfaces
-3.1 General information
-3.2 CPUFreq notifiers
-3.3 CPUFreq architecture drivers
-4. Mailing list and Links
-
-
-
-1. Supported architectures
-==========================
-
-ARM:
- ARM Integrator, SA 1100, SA1110
---------------------------------
- This driver will be ported to new CPUFreq core soon, so
- far it will not work.
-
-
-AMD Elan:
- SC400, SC410
---------------------------------
- You need to specify the highest allowed CPU frequency as
- a module parameter ("max_freq") or as boot parameter
- ("elanfreq="). Else the available speed range will be
- limited to the speed at which the CPU runs while this
- module is loaded.
-
-
-VIA Cyrix Longhaul:
- VIA Samuel/CyrixIII, VIA Cyrix Samuel/C3,
- VIA Cyrix Ezra, VIA Cyrix Ezra-T
---------------------------------
- If you do not want to scale the Front Side Bus or voltage,
- pass the module parameter "dont_scale_fsb 1" or
- "dont_scale_voltage 1". Additionally, it is advised that
- you pass the current Front Side Bus speed (in MHz) to
- this module as module parameter "current_fsb", e.g.
- "current_fsb 133" for a Front Side Bus speed of 133 MHz.
-
-
-Intel SpeedStep:
- certain mobile Intel Pentium III (Coppermine), and all mobile
- Intel Pentium III-M (Tualatin) and mobile Intel Pentium 4 P4-Ms.
---------------------------------
- Unfortunately only modern Intel ICH2-M and ICH3-M chipsets are
- supported.
-
-
-P4 CPU Clock Modulation:
- Intel Pentium 4 Xeon processors
----------------------------------
- Note that you can only switch the speed of two logical CPUs at
- once - but each phyiscal CPU may have different throttling levels.
-
-
-PowerNow! K6:
- mobile AMD K6-2+ / mobile K6-3+:
---------------------------------
- No known issues.
-
-
-Transmeta Crusoe Longrun:
- Transmeta Crusoe processors:
---------------------------------
- It is recommended to use the 2.6. /proc/cpufreq interface when
- using this driver
-
-
-
-2. User Interface
-=================
-
-2.1 /proc/cpufreq interface [2.6]
-***********************************
-
-Starting in the patches for kernel 2.5.33, CPUFreq uses a "policy"
-interface /proc/cpufreq.
-
-When you "cat" this file, you'll find something like:
-
---
- minimum CPU frequency - maximum CPU frequency - policy
-CPU 0 1200000 ( 75%) - 1600000 (100%) - performance
---
-
-This means the current policy allows this CPU to be run anywhere
-between 1.2 GHz (the value is in kHz) and 1.6 GHz with an eye towards
-performance.
-
-To change the policy, "echo" the desired new policy into
-/proc/cpufreq. Use one of the following formats:
-
-cpu_nr:min_freq:max_freq:policy
-cpu_nr%min_freq%max_freq%policy
-min_freq:max_freq:policy
-min_freq%max_freq%policy
-
-with cpu_nr being the CPU which shall be affected, min_freq and
-max_freq the lower and upper limit of the CPU core frequency in kHz,
-and policy either "performance" or "powersave".
-A few examples:
-
-root@notebook:#echo -n "0:0:0:powersave" > /proc/cpufreq
- sets the CPU #0 to the lowest supported frequency.
-
-root@notebook:#echo -n "1%100%100%performance" > /proc/cpufreq
- sets the CPU #1 to the highest supported frequency.
-
-root@notebook:#echo -n "1000000:2000000:performance" > /proc/cpufreq
- to set the frequency of all CPUs between 1 GHz and 2 GHz and to
- the policy "performance".
-
-Please note that the values you "echo" into /proc/cpufreq are
-validated first, and may be limited by hardware or thermal
-considerations. Because of this, a read from /proc/cpufreq might
-differ from what was written into it.
-
-
-When you read /proc/cpufreq for the first time after a CPUFreq driver
-has been initialized, you'll see the "default policy" for this
-driver. If this does not suit your needs, you can pass a boot
-parameter to the cpufreq core. Use the following syntax for this:
- "cpufreq=min_freq:max_freq:policy", i.e. you may not chose a
-specific CPU and you need to specify the limits in kHz and not in
-per cent.
-
-
-2.2 /proc/cpufreq interface [2.4]
-***********************************
-
-Previsiously (and still available as a config option), CPUFreq used
-a "sysctl" interface which is located in
- /proc/sys/cpu/0/
- /proc/sys/cpu/1/ ... (SMP only)
-
-In these directories, you will find three files of importance for
-CPUFreq: speed-max, speed-min and speed:
-
-speed shows the current CPU frequency in kHz,
-speed-min the minimum supported CPU frequency, and
-speed-max the maximum supported CPU frequency.
-
-
-To change the CPU frequency, "echo" the desired CPU frequency (in kHz)
-to speed. For example, to set the CPU speed to the lowest/highest
-allowed frequency do:
-
-root@notebook:# cat /proc/sys/cpu/0/speed-min > /proc/sys/cpu/0/speed
-root@notebook:# cat /proc/sys/cpu/0/speed-max > /proc/sys/cpu/0/speed
-
-
-
-3. CPUFreq core and interfaces
-===============================
-
-3.1 General information
-*************************
-
-The CPUFreq core code is located in linux/kernel/cpufreq.c. This
-cpufreq code offers a standardized interface for the CPUFreq
-architecture drivers (those pieces of code that do actual
-frequency transitions), as well as to "notifiers". These are device
-drivers or other part of the kernel that need to be informed of
-policy changes (like thermal modules like ACPI) or of all
-frequency changes (like timing code) or even need to force certain
-speed limits (like LCD drivers on ARM architecture). Additionally, the
-kernel "constant" loops_per_jiffy is updated on frequency changes
-here.
-
-
-3.2 CPUFreq notifiers
-***********************
-
-CPUFreq notifiers conform to the standard kernel notifier interface.
-See linux/include/linux/notifier.h for details on notifiers.
-
-There are two different CPUFreq notifiers - policy notifiers and
-transition notifiers.
-
-
-3.2.1 CPUFreq policy notifiers
-******************************
-
-These are notified when a new policy is intended to be set. Each
-CPUFreq policy notifier is called three times for a policy transition:
-
-1.) During CPUFREQ_ADJUST all CPUFreq notifiers may change the limit if
- they see a need for this - may it be thermal considerations or
- hardware limitations.
-
-2.) During CPUFREQ_INCOMPATIBLE only changes may be done in order to avoid
- hardware failure.
-
-3.) And during CPUFREQ_NOTIFY all notifiers are informed of the new policy
- - if two hardware drivers failed to agree on a new policy before this
- stage, the incompatible hardware shall be shut down, and the user
- informed of this.
-
-The phase is specified in the second argument to the notifier.
-
-The third argument, a void *pointer, points to a struct cpufreq_policy
-consisting of five values: cpu, min, max, policy and max_cpu_freq. Min
-and max are the lower and upper frequencies (in kHz) of the new
-policy, policy the new policy, cpu the number of the affected CPU or
-CPUFREQ_ALL_CPUS for all CPUs; and max_cpu_freq the maximum supported
-CPU frequency. This value is given for informational purposes only.
-
-
-3.2.2 CPUFreq transition notifiers
-**********************************
-
-These are notified twice when the CPUfreq driver switches the CPU core
-frequency and this change has any external implications.
-
-The second argument specifies the phase - CPUFREQ_PRECHANGE or
-CPUFREQ_POSTCHANGE.
-
-The third argument is a struct cpufreq_freqs with the following
-values:
-cpu - number of the affected CPU or CPUFREQ_ALL_CPUS
-old - old frequency
-new - new frequency
-
-
-3.3 CPUFreq architecture drivers
-**********************************
-
-CPUFreq architecture drivers are the pieces of kernel code that
-actually perform CPU frequency transitions. These need to be
-initialized separately (separate initcalls), and may be
-modularized. They interact with the CPUFreq core in the following way:
-
-cpufreq_register()
-------------------
-cpufreq_register registers an arch driver to the CPUFreq core. Please
-note that only one arch driver may be registered at any time. -EBUSY
-is returned when an arch driver is already registered. The argument to
-cpufreq_register, struct cpufreq_driver *driver, is described later.
-
-cpufreq_unregister()
---------------------
-cpufreq_unregister unregisters an arch driver, e.g. on module
-unloading. Please note that there is no check done that this is called
-from the driver which actually registered itself to the core, so
-please only call this function when you are sure the arch driver got
-registered correctly before.
-
-cpufreq_notify_transition()
----------------------------
-On "dumb" hardware where only fixed frequency can be set, the driver
-must call cpufreq_notify_transition() once before, and once after the
-actual transition.
-
-struct cpufreq_driver
----------------------
-On initialization, the arch driver is supposed to pass a pointer
-to a struct cpufreq_driver *cpufreq_driver consisting of the following
-entries:
-
-cpufreq_verify_t verify: This is a pointer to a function with the
- following definition:
- int verify_function (struct cpufreq_policy *policy).
- This function must verify the new policy is within the limits
- supported by the CPU, and at least one supported CPU is within
- this range. It may be useful to use cpufreq.h /
- cpufreq_verify_within_limits for this. If this is called with
- CPUFREQ_ALL_CPUS, and there is no common subset of frequencies
- for all CPUs, exit with an error.
-
-cpufreq_setpolicy_t setpolicy: This is a pointer to a function with
- the following definition:
- int setpolicy_function (struct cpufreq_policy *policy).
- This function must set the CPU to the new policy. If it is a
- "dumb" CPU which only allows fixed frequencies to be set, it
- shall set it to the lowest within the limit for
- CPUFREQ_POLICY_POWERSAVE, and to the highest for
- CPUFREQ_POLICY_PERFORMANCE. Once CONFIG_CPU_FREQ_DYNAMIC is
- implemented, it can use a dynamic method to adjust the speed
- between the lower and upper limit.
-
-struct cpufreq_policy *policy: This is an array of NR_CPUS struct
- cpufreq_policies, containing the current policies set for these
- CPUs. Note that policy[cpu].max_cpu_freq must contain the
- absolute maximum CPU frequency supported by the specified cpu.
-
-In case the driver is expected to run with the 2.4.-style API
-(/proc/sys/cpu/.../), two more values must be passed
-#ifdef CONFIG_CPU_FREQ_24_API
- unsigned int cpu_min_freq[NR_CPUS];
- unsigned int cpu_cur_freq[NR_CPUS];
-#endif
- with cpu_min_freq[cpu] being the minimum CPU frequency
- supported by the CPU; and the entries in cpu_cur_freq
- reflecting the current speed of the appropriate CPU.
-
-Some Requirements to CPUFreq architecture drivers
--------------------------------------------------
-* Only call cpufreq_register() when the ability to switch CPU
- frequencies is _verified_ or can't be missing. Also, all
- other initialization must be done beofre this call, as
- cpfureq_register calls the driver's verify and setpolicy code for
- each CPU.
-* cpufreq_unregister() may only be called if cpufreq_register() has
- been successfully(!) called before.
-* kfree() the struct cpufreq_driver only after the call to
- cpufreq_unregister(), unless cpufreq_register() failed.
-
-
-
-4. Mailing list and Links
-*************************
-
-
-Mailing List
-------------
-There is a CPU frequency changing CVS commit and general list where
-you can report bugs, problems or submit patches. To post a message,
-send an email to cpufreq@www.linux.org.uk, to subscribe go to
-http://www.linux.org.uk/mailman/listinfo/cpufreq. Previous post to the
-mailing list are available to subscribers at
-http://www.linux.org.uk/mailman/private/cpufreq/.
-
-
-Links
------
-the FTP archives:
-* ftp://ftp.linux.org.uk/pub/linux/cpufreq/
-
-how to access the CVS repository:
-* http://cvs.arm.linux.org.uk/
-
-the CPUFreq Mailing list:
-* http://www.linux.org.uk/mailman/listinfo/cpufreq
-
-Clock and voltage scaling for the SA-1100:
-* http://www.lart.tudelft.nl/projects/scaling
-
-CPUFreq project homepage
-* http://www.brodo.de/cpufreq/
diff -Nru a/Documentation/fb/sstfb.txt b/Documentation/fb/sstfb.txt
--- a/Documentation/fb/sstfb.txt Tue Mar 4 19:30:05 2003
+++ b/Documentation/fb/sstfb.txt Tue Mar 4 19:30:05 2003
@@ -138,7 +138,7 @@
- The driver is not your_favorite_toy-safe. this includes SMP...
[Actually from inspection it seems to be safe - Alan]
- when using XFree86 FBdev (X over fbdev) you may see strange color
- patterns at the border of your windows (the pixels loose the lowest
+ patterns at the border of your windows (the pixels lose the lowest
byte -> basicaly the blue component nd some of the green) . I'm unable
to reproduce this with XFree86-3.3, but one of the testers has this
problem with XFree86-4. apparently recent Xfree86-4.x solve this
diff -Nru a/Documentation/filesystems/hpfs.txt b/Documentation/filesystems/hpfs.txt
--- a/Documentation/filesystems/hpfs.txt Tue Mar 4 19:30:07 2003
+++ b/Documentation/filesystems/hpfs.txt Tue Mar 4 19:30:07 2003
@@ -109,7 +109,7 @@
Once I booted English OS/2 working in cp 850 and I created a file on my 852
partition. It marked file name codepage as 850 - good. But when I again booted
Czech OS/2, the file was completely inaccessible under any name. It seems that
-OS/2 uppercases the search pattern with it's system code page (852) and file
+OS/2 uppercases the search pattern with its system code page (852) and file
name it's comparing to with its code page (850). These could never match. Is it
really what IBM developers wanted? But problems continued. When I created in
Czech OS/2 another file in that directory, that file was inaccessible too. OS/2
diff -Nru a/Documentation/filesystems/vfs.txt b/Documentation/filesystems/vfs.txt
--- a/Documentation/filesystems/vfs.txt Tue Mar 4 19:30:11 2003
+++ b/Documentation/filesystems/vfs.txt Tue Mar 4 19:30:11 2003
@@ -439,7 +439,7 @@
d_release: called when a dentry is really deallocated
- d_iput: called when a dentry looses its inode (just prior to its
+ d_iput: called when a dentry loses its inode (just prior to its
being deallocated). The default when this is NULL is that the
VFS calls iput(). If you define this method, you must call
iput() yourself
diff -Nru a/Documentation/input/joystick-api.txt b/Documentation/input/joystick-api.txt
--- a/Documentation/input/joystick-api.txt Tue Mar 4 19:30:08 2003
+++ b/Documentation/input/joystick-api.txt Tue Mar 4 19:30:08 2003
@@ -168,7 +168,7 @@
and too many events to store in the queue get generated. Note that
high system load may contribute to space those reads even more.
-If time between reads is enough to fill the queue and loose an event,
+If time between reads is enough to fill the queue and lose an event,
the driver will switch to startup mode and next time you read it,
synthetic events (JS_EVENT_INIT) will be generated to inform you of
the actual state of the joystick.
diff -Nru a/Documentation/ioctl-number.txt b/Documentation/ioctl-number.txt
--- a/Documentation/ioctl-number.txt Tue Mar 4 19:30:13 2003
+++ b/Documentation/ioctl-number.txt Tue Mar 4 19:30:13 2003
@@ -72,6 +72,7 @@
linux/blkpg.h
0x20 all drivers/cdrom/cm206.h
0x22 all scsi/sg.h
+'#' 00-3F IEEE 1394 Subsystem Block for the entire subsystem
'1' 00-1F PPS kit from Ulrich Windl
'6' 00-10 Intel IA32 microcode update driver
diff -Nru a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
--- a/Documentation/kernel-parameters.txt Tue Mar 4 19:30:14 2003
+++ b/Documentation/kernel-parameters.txt Tue Mar 4 19:30:14 2003
@@ -516,6 +516,14 @@
[KNL,BOOT] Force usage of a specific region of memory
Region of memory to be used, from ss to ss+nn.
+ mem=nn[KMG]#ss[KMG]
+ [KNL,BOOT,ACPI] Mark specific memory as ACPI data.
+ Region of memory to be used, from ss to ss+nn.
+
+ mem=nn[KMG]$ss[KMG]
+ [KNL,BOOT,ACPI] Mark specific memory as reserved.
+ Region of memory to be used, from ss to ss+nn.
+
mem=nopentium [BUGS=IA-32] Disable usage of 4MB pages for kernel
memory.
diff -Nru a/Documentation/networking/decnet.txt b/Documentation/networking/decnet.txt
--- a/Documentation/networking/decnet.txt Tue Mar 4 19:30:04 2003
+++ b/Documentation/networking/decnet.txt Tue Mar 4 19:30:04 2003
@@ -42,7 +42,7 @@
3) Command line options
You can set a DECnet address on the kernel command line for compatibility
-with the 2.4 configuration procedure, but in general its not needed any more.
+with the 2.4 configuration procedure, but in general it's not needed any more.
If you do st a DECnet address on the command line, it has only one purpose
which is that its added to the addresses on the loopback device.
diff -Nru a/Documentation/networking/ifenslave.c b/Documentation/networking/ifenslave.c
--- a/Documentation/networking/ifenslave.c Tue Mar 4 19:30:05 2003
+++ b/Documentation/networking/ifenslave.c Tue Mar 4 19:30:05 2003
@@ -299,7 +299,7 @@
else { /* attach a slave interface to the master */
/* two possibilities :
- if hwaddr_notset, do nothing. The bond will assign the
- hwaddr from it's first slave.
+ hwaddr from its first slave.
- if !hwaddr_notset, assign the master's hwaddr to each slave
*/
diff -Nru a/Documentation/networking/netdevices.txt b/Documentation/networking/netdevices.txt
--- a/Documentation/networking/netdevices.txt Tue Mar 4 19:30:05 2003
+++ b/Documentation/networking/netdevices.txt Tue Mar 4 19:30:05 2003
@@ -18,7 +18,8 @@
dev->stop:
Synchronization: rtnl_lock() semaphore.
Context: process
- Notes: netif_running() is guaranteed false when this is called
+ Note1: netif_running() is guaranteed false
+ Note2: dev->poll() is guaranteed to be stopped
dev->do_ioctl:
Synchronization: rtnl_lock() semaphore.
@@ -31,10 +32,12 @@
dev->hard_start_xmit:
Synchronization: dev->xmit_lock spinlock.
Context: BHs disabled
+ Notes: netif_queue_stopped() is guaranteed false
dev->tx_timeout:
Synchronization: dev->xmit_lock spinlock.
Context: BHs disabled
+ Notes: netif_queue_stopped() is guaranteed true
dev->set_multicast_list:
Synchronization: dev->xmit_lock spinlock.
diff -Nru a/Documentation/networking/sk98lin.txt b/Documentation/networking/sk98lin.txt
--- a/Documentation/networking/sk98lin.txt Tue Mar 4 19:30:11 2003
+++ b/Documentation/networking/sk98lin.txt Tue Mar 4 19:30:11 2003
@@ -187,7 +187,7 @@
this port is not "Sense". If autonegotiation is "On", all
three values are possible. If it is "Off", only "Full" and
"Half" are allowed.
- It is usefull if your link partner does not support all
+ It is useful if your link partner does not support all
possible combinations.
- Flow Control
diff -Nru a/Documentation/s390/TAPE b/Documentation/s390/TAPE
--- a/Documentation/s390/TAPE Tue Mar 4 19:30:13 2003
+++ b/Documentation/s390/TAPE Tue Mar 4 19:30:13 2003
@@ -91,7 +91,7 @@
TODO List
- - Driver has to be stabelized still
+ - Driver still has to be stabilized
BUGS
diff -Nru a/Documentation/s390/s390dbf.txt b/Documentation/s390/s390dbf.txt
--- a/Documentation/s390/s390dbf.txt Tue Mar 4 19:30:11 2003
+++ b/Documentation/s390/s390dbf.txt Tue Mar 4 19:30:11 2003
@@ -14,7 +14,7 @@
If the system still runs but only a subcomponent which uses dbf failes,
it is possible to look at the debug logs on a live system via the Linux proc
filesystem.
-The debug feature may also very usefull for kernel and driver development.
+The debug feature may also be very useful for kernel and driver development.
Design:
-------
diff -Nru a/Documentation/scsi/ibmmca.txt b/Documentation/scsi/ibmmca.txt
--- a/Documentation/scsi/ibmmca.txt Tue Mar 4 19:30:11 2003
+++ b/Documentation/scsi/ibmmca.txt Tue Mar 4 19:30:11 2003
@@ -254,7 +254,7 @@
device to be existant, but it has no ldn assigned, it gets a ldn out of 7
to 14. The numbers are assigned in cyclic order. Therefore it takes 8
dynamical reassignments on the SCSI-devices, until a certain device
- looses its ldn again. This assures, that dynamical remapping is avoided
+ loses its ldn again. This assures that dynamical remapping is avoided
during intense I/O between up to 15 SCSI-devices (means pun,lun
combinations). A further advantage of this method is, that people who
build their kernel without probing on all luns will get what they expect,
diff -Nru a/Documentation/sound/oss/Wavefront b/Documentation/sound/oss/Wavefront
--- a/Documentation/sound/oss/Wavefront Tue Mar 4 19:30:04 2003
+++ b/Documentation/sound/oss/Wavefront Tue Mar 4 19:30:04 2003
@@ -81,7 +81,7 @@
2) Why does line XXX of the code look like this .... ?
**********************************************************************
-Either because its not finished yet, or because you're a better coder
+Either because it's not finished yet, or because you're a better coder
than I am, or because you don't understand some aspect of how the card
or the code works.
diff -Nru a/Documentation/sx.txt b/Documentation/sx.txt
--- a/Documentation/sx.txt Tue Mar 4 19:30:08 2003
+++ b/Documentation/sx.txt Tue Mar 4 19:30:08 2003
@@ -265,7 +265,7 @@
-- Done (Ugly: not the way I want it. Copied from serial.c).
- write buffer isn't flushed at close.
- -- Done. I still seem to loose a few chars at close.
+ -- Done. I still seem to lose a few chars at close.
Sorry. I think that this is a firmware issue. (-> Specialix)
- drain hardware before changing termios
diff -Nru a/Documentation/video4linux/bttv/Sound-FAQ b/Documentation/video4linux/bttv/Sound-FAQ
--- a/Documentation/video4linux/bttv/Sound-FAQ Tue Mar 4 19:30:14 2003
+++ b/Documentation/video4linux/bttv/Sound-FAQ Tue Mar 4 19:30:14 2003
@@ -120,7 +120,7 @@
video_inputs - # of video inputs the card has
audio_inputs - historical cruft, not used any more.
tuner - which input is the tuner
-svhs - which input is svhs (all others are labled composite)
+svhs - which input is svhs (all others are labeled composite)
muxsel - video mux, input->registervalue mapping
pll - same as pll= insmod option
tuner_type - same as tuner= insmod option
diff -Nru a/Documentation/vm/hugetlbpage.txt b/Documentation/vm/hugetlbpage.txt
--- a/Documentation/vm/hugetlbpage.txt Tue Mar 4 19:30:04 2003
+++ b/Documentation/vm/hugetlbpage.txt Tue Mar 4 19:30:04 2003
@@ -1,4 +1,3 @@
-2002 Rohit Seth
The intent of this file is to give a brief summary of hugetlbpage support in
the Linux kernel. This support is built on top of multiple page size support
@@ -11,75 +10,194 @@
now as bigger and bigger physical memories (several GBs) are more readily
available.
-The current support is provided in kernel using the following two system calls:
+Users can use the huge page support in the Linux kernel by either using the mmap
+system call or standard SYSv shared memory system calls (shmget, shmat).
-1) sys_alloc_hugepages(int key, unsigned long addr, size_t len, int prot, int flag)
+First the Linux kernel needs to be built with CONFIG_HUGETLB_PAGE (present
+under Processor types and features) and CONFIG_HUGETLBFS (present under the
+File systems option in the config menu) config options.
-2) sys_free_hugepages(unsigned long addr)
+A kernel built with hugepage support will show the number of configured
+hugepages in the system when you run the "cat /proc/meminfo" command.
-Arguments to these system calls are defined as follows:
-
-key: If a user application wants to share hugepages with other
- processes then this input argument needs to be greater than 0.
- Different applications can use the same key to map the same physical
- memory (mapped by hugeTLBs) in their address space. When a process
- forks, then children share the same physical memory with their parent.
-
- For the cases when an application wishes to keep the huge
- pages private, the key value of 0 is defined. In this case
- kernel allocates hugetlb pages to the process that are not
- shareable across different processes. These segments are marked
- private for the process. These segments are not copied to
- children's address space on forks - the child will have no
- mapping for these virtual addresses.
-
- The key manangement (and assignment) part is left to user
- applications.
-
-addr: This is an address hint. The kernel will perform a sanity check
- on this address (alignment etc.) before using it. It is possible that
- kernel will allocates a different address (on success).
-
-len: Length of the required segment. Applications are expected to give
- HPAGE_SIZE aligned length. (Else EINVAL is returned.)
-
-prot: The prot parameter specifies the desired memory protection on the
- requested hugepages. The possible values are PROT_EXEC, PROT_READ,
- PROT_WRITE.
-
-flag: This parameter can only take the value IPC_CREAT for the cases
- when "key" value greater than zero (shared hugepage cases). It is
- ignored for values of "key" that are <= 0.
-
- This parameter indicates that the kernel should create a new huge
- page segment (corresponding to "key"), if none already exists. If this
- flag is not set, then sys_allochugepages() will return ENOENT if there
- is no segment associated with corresponding "key".
-
-In case of success, sys_alloc_hugepages() return the allocated virtual address.
-
-sys_free_hugepages() frees the hugetlb resources from the calling process's
-address space. The input argument "addr" specifies the segment that needs to
-be freed. It is important to note that for the shared hugepage cases, the
-underlying hugepages are freed onlyafter all the users of those pages have
-either freed those hugepages or have exited.
-
-/proc/sys/vm_nr_hugepages indicates the current number of configured hugetlb
-pages in the kernel. Super user privileges are required for modification of
-this value. The allocation of hugetlb pages is possible only if there are
-enough physically contiguous free pages in system OR if there are enough
-hugetlb pages free that can be transfered back to regular memory pool.
-
-/proc/meminfo also gives the information about the total number of hugetlb
+/proc/meminfo also provides information about the total number of hugetlb
pages configured in the kernel. It also displays information about the
number of free hugetlb pages at any time. It also displays information about
-the configured hugepage size - this is needed for generting the proper
+the configured hugepage size - this is needed for generating the proper
alignment and size of the arguments to the above system calls.
-Pages that are used as hugetlb pages are marked reserved inside the kernel.
-This allows hugetlb pages to be always locked in memory. The user either
-needs to be super user to use these pages or one of supplementary group
-should include root. In future there will be support to check RLIMIT_MLOCK
-for limited (number of hugetlb pages) usage to unprivileged applications.
+The output of "cat /proc/meminfo" will include lines like:
-If the kernel does not support hugepages these system calls will return ENOSYS.
+.....
+HugePages_Total: xxx
+HugePages_Free: yyy
+Hugepagesize: zzz kB
+
+/proc/filesystems should also show a filesystem of type "hugetlbfs" configured
+in the kernel.
+
+/proc/sys/vm/nr_hugepages indicates the current number of configured hugetlb
+pages in the kernel. The superuser can dynamically request more (or free some
+pre-configured) hugepages.
+The allocation (or deallocation) of hugetlb pages is possible only if there are
+enough physically contiguous free pages in the system (freeing of hugepages is
+possible only if there are enough hugetlb pages free that can be transferred
+back to the regular memory pool).
+
+Pages that are used as hugetlb pages are reserved inside the kernel and can
+not be used for other purposes.
+
+Once the kernel with Hugetlb page support is built and running, a user can
+use either the mmap system call or shared memory system calls to start using
+the huge pages. It is required that the system administrator preallocate
+enough memory for huge page purposes.
+
+Use the following command to dynamically allocate/deallocate hugepages:
+
+ echo 20 > /proc/sys/vm/nr_hugepages
+
+This command will try to configure 20 hugepages in the system. The success
+or failure of the allocation depends on the amount of physically contiguous
+memory that is present in the system at this time. System administrators may
+want to put this command in one of the local rc init files. This enables the
+kernel to request huge pages early in the boot process (when the possibility
+of getting physically contiguous pages is still very high).
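The allocation step above can be sketched as follows. A scratch file stands in for /proc/sys/vm/nr_hugepages so the commands can be run without superuser privileges; on a live system, write to the real file and read it back, since the kernel may reserve fewer pages than requested.

```shell
# Scratch stand-in for /proc/sys/vm/nr_hugepages, so this sketch runs
# without root; substitute the real path on a live system.
NR_HUGEPAGES=/tmp/nr_hugepages-demo

# Request 20 hugepages ...
echo 20 > "$NR_HUGEPAGES"

# ... then read the file back: on a real system the kernel may have
# configured fewer pages if physically contiguous memory was scarce.
cat "$NR_HUGEPAGES"
```

On a live system, "grep -i huge /proc/meminfo" afterwards shows HugePages_Total and HugePages_Free resulting from such a request.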
+
+If the user applications are going to request hugepages using mmap system
+call, then it is required that system administrator mount a file system of
+type hugetlbfs:
+
+ mount none /mnt/huge -t hugetlbfs
+
+This command mounts a (pseudo) filesystem of type hugetlbfs on the directory
+/mnt/huge. Any file created on /mnt/huge uses hugepages. An example is
+given at the end of this document.
+
+read and write system calls are not supported on files that reside on hugetlb
+file systems.
+
+Also, it is important to note that no such mount command is required if the
+applications are going to use only shmat/shmget system calls. It is possible
+for the same or different applications to use any combination of mmap and shm*
+calls, though mounting the filesystem is required for using mmap.
+
+/* Example of using hugepage in user application using Sys V shared memory
+ * system calls. In this example, app is requesting memory of size 256MB that
+ * is backed by huge pages. Application uses the flag SHM_HUGETLB in shmget
+ * system call to inform the kernel that it is requesting hugepages. For
+ * the IA-64 architecture, the Linux kernel reserves Region number 4 for
+ * hugepages. That means addresses starting with 0x800000.... will need
+ * to be specified.
+ */
+#include <sys/types.h>
+#include <sys/ipc.h>
+#include <sys/shm.h>
+#include <stdio.h>
+
+extern int errno;
+#define SHM_HUGETLB 04000
+#define LPAGE_SIZE (256UL*1024UL*1024UL)
+#define dprintf(x) printf(x)
+#define ADDR (0x8000000000000000UL)
+main()
+{
+ int shmid;
+ int i, j, k;
+ volatile char *shmaddr;
+
+ if ((shmid =shmget(2, LPAGE_SIZE, SHM_HUGETLB|IPC_CREAT|SHM_R|SHM_W ))
+< 0) {
+ perror("Failure:");
+ exit(1);
+ }
+ printf("shmid: 0x%x\n", shmid);
+ shmaddr = shmat(shmid, (void *)ADDR, SHM_RND) ;
+ if (errno != 0) {
+ perror("Shared Memory Attach Failure:");
+ exit(2);
+ }
+ printf("shmaddr: %p\n", shmaddr);
+
+ dprintf("Starting the writes:\n");
+ for (i=0;i
+#include <stdio.h>
+#include <sys/mman.h>
+#include <fcntl.h>
+
+#define FILE_NAME "/mnt/hugepagefile"
+#define LENGTH (256*1024*1024)
+#define PROTECTION (PROT_READ | PROT_WRITE)
+#define FLAGS MAP_SHARED |MAP_FIXED
+#define ADDRESS (char *)(0x60000000UL + 0x8000000000000000UL)
+
+extern errno;
+
+check_bytes(char *addr)
+{
+ printf("First hex is %x\n", *((unsigned int *)addr));
+}
+
+write_bytes(char *addr)
+{
+ int i;
+ for (i=0;i .tmp_version
- mv -f .tmp_version .version
- $(Q)$(MAKE) $(build)=init
+ set -e; \
+ $(if $(filter .tmp_kallsyms%,$^),, \
+ echo ' GEN .version'; \
+ . $(srctree)/scripts/mkversion > .tmp_version; \
+ mv -f .tmp_version .version; \
+ $(MAKE) $(build)=init; \
)
- $(call cmd,vmlinux__)
+ $(call cmd,vmlinux__); \
echo 'cmd_$@ := $(cmd_vmlinux__)' > $(@D)/.$(@F).cmd
endef
-define rule_vmlinux_no_percpu
- $(rule_vmlinux__)
- $(NM) $@ | grep -v '\(compiled\)\|\(\.o$$\)\|\( [aUw] \)\|\(\.\.ng$$\)\|\(LASH[RL]DI\)' | sort > System.map
-endef
-
ifdef CONFIG_SMP
+# Final awk script makes sure per-cpu vars are in per-cpu section, as
+# old gcc (eg egcs 2.92.11) ignores section attribute if uninitialized.
+
+check_per_cpu = $(AWK) -f $(srctree)/scripts/per-cpu-check.awk < System.map
+endif
+
define rule_vmlinux
- $(rule_vmlinux_no_percpu)
- $(AWK) -f $(srctree)/scripts/per-cpu-check.awk < System.map
-endef
-else
-define rule_vmlinux
- $(rule_vmlinux_no_percpu)
+ $(rule_vmlinux__)
+ $(NM) $@ | grep -v '\(compiled\)\|\(\.o$$\)\|\( [aUw] \)\|\(\.\.ng$$\)\|\(LASH[RL]DI\)' | sort > System.map
+ $(check_per_cpu)
endef
-endif
LDFLAGS_vmlinux += -T arch/$(ARCH)/vmlinux.lds.s
@@ -377,7 +373,7 @@
$(call cmd,kallsyms)
.tmp_vmlinux1: $(vmlinux-objs) arch/$(ARCH)/vmlinux.lds.s FORCE
- $(call if_changed_rule,vmlinux__)
+ +$(call if_changed_rule,vmlinux__)
.tmp_vmlinux2: $(vmlinux-objs) .tmp_kallsyms1.o arch/$(ARCH)/vmlinux.lds.s FORCE
$(call if_changed_rule,vmlinux__)
@@ -457,14 +453,14 @@
# Split autoconf.h into include/linux/config/*
include/config/MARKER: scripts/split-include include/linux/autoconf.h
- @echo ' SPLIT include/linux/autoconf.h -> include/config/*'
+ @echo ' SPLIT include/linux/autoconf.h -> include/config/*'
@scripts/split-include include/linux/autoconf.h include/config
@touch $@
# if .config is newer than include/linux/autoconf.h, someone tinkered
# with it and forgot to run make oldconfig
-include/linux/autoconf.h: .config
+include/linux/autoconf.h: .config scripts/fixdep
$(Q)$(MAKE) $(build)=scripts/kconfig scripts/kconfig/conf
./scripts/kconfig/conf -s arch/$(ARCH)/Kconfig
@@ -481,7 +477,7 @@
echo '"$(KERNELRELEASE)" exceeds $(uts_len) characters' >&2; \
exit 1; \
fi;
- @echo -n ' Generating $@'
+ @echo -n ' GEN $@'
@(echo \#define UTS_RELEASE \"$(KERNELRELEASE)\"; \
echo \#define LINUX_VERSION_CODE `expr $(VERSION) \\* 65536 + $(PATCHLEVEL) \\* 256 + $(SUBLEVEL)`; \
echo '#define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) + (c))'; \
@@ -506,7 +502,7 @@
# Build modules
.PHONY: modules
-modules: $(SUBDIRS) $(if $(CONFIG_MODVERSIONS),vmlinux)
+modules: $(SUBDIRS) $(if $(KBUILD_BUILTIN),vmlinux)
@echo ' Building modules, stage 2.';
$(Q)$(MAKE) -rR -f scripts/Makefile.modpost
@@ -571,33 +567,6 @@
echo "#endif" )
endef
-# RPM target
-# ---------------------------------------------------------------------------
-
-# If you do a make spec before packing the tarball you can rpm -ta it
-
-spec:
- . scripts/mkspec >kernel.spec
-
-# Build a tar ball, generate an rpm from it and pack the result
-# There arw two bits of magic here
-# 1) The use of /. to avoid tar packing just the symlink
-# 2) Removing the .dep files as they have source paths in them that
-# will become invalid
-
-rpm: clean spec
- find . $(RCS_FIND_IGNORE) \
- \( -size 0 -o -name .depend -o -name .hdepend\) \
- -type f -print | xargs rm -f
- set -e; \
- cd $(TOPDIR)/.. ; \
- ln -sf $(TOPDIR) $(KERNELPATH) ; \
- tar -cvz $(RCS_TAR_IGNORE) -f $(KERNELPATH).tar.gz $(KERNELPATH)/. ; \
- rm $(KERNELPATH) ; \
- cd $(TOPDIR) ; \
- $(CONFIG_SHELL) $(srctree)/scripts/mkversion > .version ; \
- rpm -ta $(TOPDIR)/../$(KERNELPATH).tar.gz ; \
- rm $(TOPDIR)/../$(KERNELPATH).tar.gz
else # ifdef include_config
@@ -630,7 +599,7 @@
# ---------------------------------------------------------------------------
.PHONY: oldconfig xconfig menuconfig config \
- make_with_config
+ make_with_config rpm
scripts/kconfig/conf scripts/kconfig/mconf scripts/kconfig/qconf: scripts/fixdep FORCE
$(Q)$(MAKE) $(build)=scripts/kconfig $@
@@ -763,6 +732,36 @@
tags: FORCE
$(call cmd,tags)
+# RPM target
+# ---------------------------------------------------------------------------
+
+# If you do a make spec before packing the tarball you can rpm -ta it
+
+spec:
+ . $(srctree)/scripts/mkspec >kernel.spec
+
+# Build a tar ball, generate an rpm from it and pack the result
+# There are two bits of magic here
+# 1) The use of /. to avoid tar packing just the symlink
+# 2) Removing the .dep files as they have source paths in them that
+# will become invalid
+
+rpm: clean spec
+ find . $(RCS_FIND_IGNORE) \
+ \( -size 0 -o -name .depend -o -name .hdepend \) \
+ -type f -print | xargs rm -f
+ set -e; \
+ cd $(TOPDIR)/.. ; \
+ ln -sf $(TOPDIR) $(KERNELPATH) ; \
+ tar -cvz $(RCS_TAR_IGNORE) -f $(KERNELPATH).tar.gz $(KERNELPATH)/. ; \
+ rm $(KERNELPATH) ; \
+ cd $(TOPDIR) ; \
+ $(CONFIG_SHELL) $(srctree)/scripts/mkversion > .version ; \
+ RPM=`which rpmbuild`; \
+ if [ -z "$$RPM" ]; then RPM=rpm; fi; \
+ $$RPM -ta $(TOPDIR)/../$(KERNELPATH).tar.gz ; \
+ rm $(TOPDIR)/../$(KERNELPATH).tar.gz
+
# Brief documentation of the typical targets used
# ---------------------------------------------------------------------------
@@ -782,7 +781,6 @@
@echo ''
@echo 'Other generic targets:'
@echo ' all - Build all targets marked with [*]'
- @echo ' dep - Create module version information'
@echo '* vmlinux - Build the bare kernel'
@echo '* modules - Build all modules'
@echo ' dir/file.[ois]- Build specified target only'
@@ -802,7 +800,7 @@
# Documentation targets
# ---------------------------------------------------------------------------
-sgmldocs psdocs pdfdocs htmldocs: scripts
+sgmldocs psdocs pdfdocs htmldocs: scripts/docproc FORCE
$(Q)$(MAKE) $(build)=Documentation/DocBook $@
# Scripts to check various things for consistency
@@ -812,11 +810,6 @@
find * $(RCS_FIND_IGNORE) \
-name '*.[hcS]' -type f -print | sort \
| xargs $(PERL) -w scripts/checkconfig.pl
-
-checkhelp:
- find * $(RCS_FIND_IGNORE) \
- -name [cC]onfig.in -print | sort \
- | xargs $(PERL) -w scripts/checkhelp.pl
checkincludes:
find * $(RCS_FIND_IGNORE) \
diff -Nru a/arch/alpha/boot/tools/objstrip.c b/arch/alpha/boot/tools/objstrip.c
--- a/arch/alpha/boot/tools/objstrip.c Tue Mar 4 19:30:14 2003
+++ b/arch/alpha/boot/tools/objstrip.c Tue Mar 4 19:30:14 2003
@@ -7,7 +7,7 @@
*/
/*
* Converts an ECOFF or ELF object file into a bootable file. The
- * object file must be a OMAGIC file (i.e., data and bss follow immediatly
+ * object file must be a OMAGIC file (i.e., data and bss follow immediately
* behind the text). See DEC "Assembly Language Programmer's Guide"
* documentation for details. The SRM boot process is documented in
* the Alpha AXP Architecture Reference Manual, Second Edition by
diff -Nru a/arch/alpha/kernel/entry.S b/arch/alpha/kernel/entry.S
--- a/arch/alpha/kernel/entry.S Tue Mar 4 19:30:03 2003
+++ b/arch/alpha/kernel/entry.S Tue Mar 4 19:30:03 2003
@@ -591,7 +591,6 @@
*/
.globl ret_from_fork
-#if CONFIG_SMP || CONFIG_PREEMPT
.align 4
.ent ret_from_fork
ret_from_fork:
@@ -599,9 +598,6 @@
mov $17, $16
jmp $31, schedule_tail
.end ret_from_fork
-#else
-ret_from_fork = ret_from_sys_call
-#endif
/*
* kernel_thread(fn, arg, clone_flags)
diff -Nru a/arch/alpha/kernel/pci-noop.c b/arch/alpha/kernel/pci-noop.c
--- a/arch/alpha/kernel/pci-noop.c Tue Mar 4 19:30:08 2003
+++ b/arch/alpha/kernel/pci-noop.c Tue Mar 4 19:30:08 2003
@@ -48,7 +48,6 @@
sys_pciconfig_iobase(long which, unsigned long bus, unsigned long dfn)
{
struct pci_controller *hose;
- struct pci_dev *dev;
/* from hose or from bus.devfn */
if (which & IOBASE_FROM_HOSE) {
@@ -106,6 +105,7 @@
void *
pci_alloc_consistent(struct pci_dev *pdev, size_t size, dma_addr_t *dma_addrp)
{
+ return NULL;
}
void
pci_free_consistent(struct pci_dev *pdev, size_t size, void *cpu_addr,
@@ -116,6 +116,7 @@
pci_map_single(struct pci_dev *pdev, void *cpu_addr, size_t size,
int direction)
{
+ return (dma_addr_t) 0;
}
void
pci_unmap_single(struct pci_dev *pdev, dma_addr_t dma_addr, size_t size,
diff -Nru a/arch/alpha/kernel/pci.c b/arch/alpha/kernel/pci.c
--- a/arch/alpha/kernel/pci.c Tue Mar 4 19:30:04 2003
+++ b/arch/alpha/kernel/pci.c Tue Mar 4 19:30:04 2003
@@ -63,17 +63,6 @@
}
static void __init
-quirk_ali_ide_ports(struct pci_dev *dev)
-{
- if (dev->resource[0].end == 0xffff)
- dev->resource[0].end = dev->resource[0].start + 7;
- if (dev->resource[2].end == 0xffff)
- dev->resource[2].end = dev->resource[2].start + 7;
- if (dev->resource[3].end == 0xffff)
- dev->resource[3].end = dev->resource[3].start + 7;
-}
-
-static void __init
quirk_cypress(struct pci_dev *dev)
{
/* The Notorious Cy82C693 chip. */
@@ -121,8 +110,6 @@
struct pci_fixup pcibios_fixups[] __initdata = {
{ PCI_FIXUP_HEADER, PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82378,
quirk_isa_bridge },
- { PCI_FIXUP_HEADER, PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M5229,
- quirk_ali_ide_ports },
{ PCI_FIXUP_HEADER, PCI_VENDOR_ID_CONTAQ, PCI_DEVICE_ID_CONTAQ_82C693,
quirk_cypress },
{ PCI_FIXUP_FINAL, PCI_ANY_ID, PCI_ANY_ID,
diff -Nru a/arch/alpha/kernel/pci_iommu.c b/arch/alpha/kernel/pci_iommu.c
--- a/arch/alpha/kernel/pci_iommu.c Tue Mar 4 19:30:14 2003
+++ b/arch/alpha/kernel/pci_iommu.c Tue Mar 4 19:30:14 2003
@@ -318,7 +318,7 @@
/* Unmap a single streaming mode DMA translation. The DMA_ADDR and
SIZE must match what was provided for in a previous pci_map_single
call. All other usages are undefined. After this call, reads by
- the cpu to the buffer are guarenteed to see whatever the device
+ the cpu to the buffer are guaranteed to see whatever the device
wrote there. */
void
diff -Nru a/arch/alpha/kernel/process.c b/arch/alpha/kernel/process.c
--- a/arch/alpha/kernel/process.c Tue Mar 4 19:30:13 2003
+++ b/arch/alpha/kernel/process.c Tue Mar 4 19:30:13 2003
@@ -155,10 +155,7 @@
struct halt_info args;
args.mode = mode;
args.restart_cmd = restart_cmd;
-#ifdef CONFIG_SMP
- smp_call_function(common_shutdown_1, &args, 1, 0);
-#endif
- common_shutdown_1(&args);
+ on_each_cpu(common_shutdown_1, &args, 1, 0);
}
void
diff -Nru a/arch/alpha/kernel/smp.c b/arch/alpha/kernel/smp.c
--- a/arch/alpha/kernel/smp.c Tue Mar 4 19:30:05 2003
+++ b/arch/alpha/kernel/smp.c Tue Mar 4 19:30:05 2003
@@ -899,10 +899,8 @@
smp_imb(void)
{
/* Must wait other processors to flush their icache before continue. */
- if (smp_call_function(ipi_imb, NULL, 1, 1))
+ if (on_each_cpu(ipi_imb, NULL, 1, 1))
printk(KERN_CRIT "smp_imb: timed out\n");
-
- imb();
}
static void
@@ -916,11 +914,9 @@
{
/* Although we don't have any data to pass, we do want to
synchronize with the other processors. */
- if (smp_call_function(ipi_flush_tlb_all, NULL, 1, 1)) {
+ if (on_each_cpu(ipi_flush_tlb_all, NULL, 1, 1)) {
printk(KERN_CRIT "flush_tlb_all: timed out\n");
}
-
- tbia();
}
#define asn_locked() (cpu_data[smp_processor_id()].asn_lock)
@@ -938,6 +934,8 @@
void
flush_tlb_mm(struct mm_struct *mm)
{
+ preempt_disable();
+
if (mm == current->active_mm) {
flush_tlb_current(mm);
if (atomic_read(&mm->mm_users) <= 1) {
@@ -948,6 +946,7 @@
if (mm->context[cpu])
mm->context[cpu] = 0;
}
+ preempt_enable();
return;
}
}
@@ -955,6 +954,8 @@
if (smp_call_function(ipi_flush_tlb_mm, mm, 1, 1)) {
printk(KERN_CRIT "flush_tlb_mm: timed out\n");
}
+
+ preempt_enable();
}
struct flush_tlb_page_struct {
@@ -981,6 +982,8 @@
struct flush_tlb_page_struct data;
struct mm_struct *mm = vma->vm_mm;
+ preempt_disable();
+
if (mm == current->active_mm) {
flush_tlb_current_page(mm, vma, addr);
if (atomic_read(&mm->mm_users) <= 1) {
@@ -991,6 +994,7 @@
if (mm->context[cpu])
mm->context[cpu] = 0;
}
+ preempt_enable();
return;
}
}
@@ -1002,6 +1006,8 @@
if (smp_call_function(ipi_flush_tlb_page, &data, 1, 1)) {
printk(KERN_CRIT "flush_tlb_page: timed out\n");
}
+
+ preempt_enable();
}
void
@@ -1030,6 +1036,8 @@
if ((vma->vm_flags & VM_EXEC) == 0)
return;
+ preempt_disable();
+
if (mm == current->active_mm) {
__load_new_mm_context(mm);
if (atomic_read(&mm->mm_users) <= 1) {
@@ -1040,6 +1048,7 @@
if (mm->context[cpu])
mm->context[cpu] = 0;
}
+ preempt_enable();
return;
}
}
@@ -1047,6 +1056,8 @@
if (smp_call_function(ipi_flush_icache_page, mm, 1, 1)) {
printk(KERN_CRIT "flush_icache_page: timed out\n");
}
+
+ preempt_enable();
}
#ifdef CONFIG_DEBUG_SPINLOCK
diff -Nru a/arch/alpha/kernel/sys_marvel.c b/arch/alpha/kernel/sys_marvel.c
--- a/arch/alpha/kernel/sys_marvel.c Tue Mar 4 19:30:12 2003
+++ b/arch/alpha/kernel/sys_marvel.c Tue Mar 4 19:30:12 2003
@@ -440,7 +440,7 @@
return;
/*
- * There is a local IO7 - redirect all of it's interrupts here.
+ * There is a local IO7 - redirect all of its interrupts here.
*/
printk("Redirecting IO7 interrupts to local CPU at PE %u\n", cpuid);
diff -Nru a/arch/alpha/kernel/time.c b/arch/alpha/kernel/time.c
--- a/arch/alpha/kernel/time.c Tue Mar 4 19:30:09 2003
+++ b/arch/alpha/kernel/time.c Tue Mar 4 19:30:09 2003
@@ -50,7 +50,7 @@
#include "proto.h"
#include "irq_impl.h"
-u64 jiffies_64;
+u64 jiffies_64 = INITIAL_JIFFIES;
extern unsigned long wall_jiffies; /* kernel/timer.c */
diff -Nru a/arch/alpha/kernel/traps.c b/arch/alpha/kernel/traps.c
--- a/arch/alpha/kernel/traps.c Tue Mar 4 19:30:13 2003
+++ b/arch/alpha/kernel/traps.c Tue Mar 4 19:30:13 2003
@@ -411,7 +411,7 @@
}
/* There is an ifdef in the PALcode in MILO that enables a
- "kernel debugging entry point" as an unpriviledged call_pal.
+ "kernel debugging entry point" as an unprivileged call_pal.
We don't want to have anything to do with it, but unfortunately
several versions of MILO included in distributions have it enabled,
diff -Nru a/arch/alpha/lib/checksum.c b/arch/alpha/lib/checksum.c
--- a/arch/alpha/lib/checksum.c Tue Mar 4 19:30:09 2003
+++ b/arch/alpha/lib/checksum.c Tue Mar 4 19:30:09 2003
@@ -63,7 +63,7 @@
((unsigned long) ntohs(len) << 16) +
((unsigned long) proto << 8));
- /* Fold down to 32-bits so we don't loose in the typedef-less
+ /* Fold down to 32-bits so we don't lose in the typedef-less
network stack. */
/* 64 to 33 */
result = (result & 0xffffffff) + (result >> 32);
diff -Nru a/arch/arm/common/sa1111.c b/arch/arm/common/sa1111.c
--- a/arch/arm/common/sa1111.c Tue Mar 4 19:30:13 2003
+++ b/arch/arm/common/sa1111.c Tue Mar 4 19:30:13 2003
@@ -418,6 +418,7 @@
spin_lock_irqsave(&sachip->lock, flags);
+#if CONFIG_ARCH_SA1100
/*
* First, set up the 3.6864MHz clock on GPIO 27 for the SA-1111:
* (SA-1110 Developer's Manual, section 9.1.2.1)
@@ -425,6 +426,11 @@
GAFR |= GPIO_32_768kHz;
GPDR |= GPIO_32_768kHz;
TUCR = TUCR_3_6864MHz;
+#elif CONFIG_ARCH_PXA
+ pxa_gpio_mode(GPIO11_3_6MHz_MD);
+#else
+#error missing clock setup
+#endif
/*
* Turn VCO on, and disable PLL Bypass.
@@ -461,6 +467,8 @@
spin_unlock_irqrestore(&sachip->lock, flags);
}
+#ifdef CONFIG_ARCH_SA1100
+
/*
* Configure the SA1111 shared memory controller.
*/
@@ -476,6 +484,8 @@
sa1111_writel(smcr, sachip->base + SA1111_SMCR);
}
+#endif
+
static void
sa1111_init_one_child(struct sa1111 *sachip, struct sa1111_dev *sadev, unsigned int offset)
{
@@ -569,6 +579,7 @@
*/
sa1111_wake(sachip);
+#ifdef CONFIG_ARCH_SA1100
/*
* The SDRAM configuration of the SA1110 and the SA1111 must
* match. This is very important to ensure that SA1111 accesses
@@ -592,6 +603,7 @@
* Enable the SA1110 memory bus request and grant signals.
*/
sa1110_mb_enable();
+#endif
/*
* The interrupt controller must be initialised before any
diff -Nru a/arch/arm/kernel/entry-armo.S b/arch/arm/kernel/entry-armo.S
--- a/arch/arm/kernel/entry-armo.S Tue Mar 4 19:30:05 2003
+++ b/arch/arm/kernel/entry-armo.S Tue Mar 4 19:30:05 2003
@@ -426,7 +426,7 @@
mov r2, #0
tst r4, #1 << 20 @ Check to see if it is a write instruction
orreq r2, r2, #FAULT_CODE_WRITE @ Indicate write instruction
- mov r1, r4, lsr #22 @ Now branch to the relevent processing routine
+ mov r1, r4, lsr #22 @ Now branch to the relevant processing routine
and r1, r1, #15 << 2
add pc, pc, r1
movs pc, lr
diff -Nru a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
--- a/arch/arm/kernel/entry-armv.S Tue Mar 4 19:30:05 2003
+++ b/arch/arm/kernel/entry-armv.S Tue Mar 4 19:30:05 2003
@@ -1026,7 +1026,7 @@
mrs lr, spsr
str lr, [r13, #4] @ save spsr_IRQ
@
- @ now branch to the relevent MODE handling routine
+ @ now branch to the relevant MODE handling routine
@
mov r13, #PSR_I_BIT | MODE_SVC
msr spsr_c, r13 @ switch to SVC_32 mode
@@ -1067,7 +1067,7 @@
mrs lr, spsr
str lr, [r13, #4]
@
- @ now branch to the relevent MODE handling routine
+ @ now branch to the relevant MODE handling routine
@
mov r13, #PSR_I_BIT | MODE_SVC
msr spsr_c, r13 @ switch to SVC_32 mode
@@ -1109,7 +1109,7 @@
mrs lr, spsr
str lr, [r13, #4] @ save spsr_ABT
@
- @ now branch to the relevent MODE handling routine
+ @ now branch to the relevant MODE handling routine
@
mov r13, #PSR_I_BIT | MODE_SVC
msr spsr_c, r13 @ switch to SVC_32 mode
@@ -1150,7 +1150,7 @@
mrs lr, spsr
str lr, [r13, #4] @ save spsr_UND
@
- @ now branch to the relevent MODE handling routine
+ @ now branch to the relevant MODE handling routine
@
mov r13, #PSR_I_BIT | MODE_SVC
msr spsr_c, r13 @ switch to SVC_32 mode
diff -Nru a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
--- a/arch/arm/kernel/head.S Tue Mar 4 19:30:03 2003
+++ b/arch/arm/kernel/head.S Tue Mar 4 19:30:03 2003
@@ -37,7 +37,7 @@
.globl swapper_pg_dir
.equ swapper_pg_dir, TEXTADDR - 0x4000
- .macro pgtbl, reg, rambase
+ .macro pgtbl, reg
adr \reg, stext
sub \reg, \reg, #0x4000
.endm
@@ -47,7 +47,7 @@
* can convert the page table base address to the base address of the section
* containing both.
*/
- .macro krnladr, rd, pgtable, rambase
+ .macro krnladr, rd, pgtable
bic \rd, \pgtable, #0x000ff000
.endm
@@ -164,7 +164,7 @@
* r8 = page table flags
*/
__create_page_tables:
- pgtbl r4, r5 @ page table address
+ pgtbl r4 @ page table address
/*
* Clear the 16K level 1 swapper page table
@@ -184,7 +184,7 @@
* cater for the MMU enable. This identity mapping
* will be removed by paging_init()
*/
- krnladr r2, r4, r5 @ start of kernel
+ krnladr r2, r4 @ start of kernel
add r3, r8, r2 @ flags + kernel base
str r3, [r4, r2, lsr #18] @ identity mapping
diff -Nru a/arch/arm/kernel/ptrace.c b/arch/arm/kernel/ptrace.c
--- a/arch/arm/kernel/ptrace.c Tue Mar 4 19:30:04 2003
+++ b/arch/arm/kernel/ptrace.c Tue Mar 4 19:30:04 2003
@@ -435,7 +435,7 @@
* be receiving a prefetch abort shortly.
*
* If we don't set this breakpoint here, then we can
- * loose control of the thread during single stepping.
+ * lose control of the thread during single stepping.
*/
if (!alt || predicate(insn) != PREDICATE_ALWAYS)
add_breakpoint(child, dbg, pc + 4);
diff -Nru a/arch/arm/kernel/time.c b/arch/arm/kernel/time.c
--- a/arch/arm/kernel/time.c Tue Mar 4 19:30:11 2003
+++ b/arch/arm/kernel/time.c Tue Mar 4 19:30:11 2003
@@ -32,7 +32,7 @@
#include
#include
-u64 jiffies_64;
+u64 jiffies_64 = INITIAL_JIFFIES;
extern unsigned long wall_jiffies;
diff -Nru a/arch/arm/mach-iop310/iop310-pci.c b/arch/arm/mach-iop310/iop310-pci.c
--- a/arch/arm/mach-iop310/iop310-pci.c Tue Mar 4 19:30:07 2003
+++ b/arch/arm/mach-iop310/iop310-pci.c Tue Mar 4 19:30:07 2003
@@ -296,7 +296,7 @@
* within 3 instructions."
*
* This does not appear to be the case. With 8 NOPs after the load, we
- * see the imprecise abort occuring on the STM of iop310_sec_pci_status()
+ * see the imprecise abort occurring on the STM of iop310_sec_pci_status()
* which is about 10 instructions away.
*
* Always trust reality!
diff -Nru a/arch/arm/mach-iop310/mm.c b/arch/arm/mach-iop310/mm.c
--- a/arch/arm/mach-iop310/mm.c Tue Mar 4 19:30:05 2003
+++ b/arch/arm/mach-iop310/mm.c Tue Mar 4 19:30:05 2003
@@ -1,7 +1,7 @@
/*
* linux/arch/arm/mach-iop310/mm.c
*
- * Low level memory intialization for IOP310 based systems
+ * Low level memory initialization for IOP310 based systems
*
* Author: Nicolas Pitre
*
diff -Nru a/arch/arm/mach-pxa/Makefile b/arch/arm/mach-pxa/Makefile
--- a/arch/arm/mach-pxa/Makefile Tue Mar 4 19:30:09 2003
+++ b/arch/arm/mach-pxa/Makefile Tue Mar 4 19:30:09 2003
@@ -4,18 +4,17 @@
# Common support (must be linked before board specific support)
obj-y += generic.o irq.o dma.o
-obj-$(CONFIG_SA1111) += sa1111.o
# Specific board support
obj-$(CONFIG_ARCH_LUBBOCK) += lubbock.o
obj-$(CONFIG_ARCH_PXA_IDP) += idp.o
# Support for blinky lights
-leds-y := leds.o
-leds-$(CONFIG_ARCH_LUBBOCK) += leds-lubbock.o
-leds-$(CONFIG_ARCH_PXA_IDP) += leds-idp.o
+led-y := leds.o
+led-$(CONFIG_ARCH_LUBBOCK) += leds-lubbock.o
+led-$(CONFIG_ARCH_PXA_IDP) += leds-idp.o
-obj-$(CONFIG_LEDS) += $(leds-y)
+obj-$(CONFIG_LEDS) += $(led-y)
# Misc features
obj-$(CONFIG_PM) += pm.o sleep.o
diff -Nru a/arch/arm/mach-pxa/generic.c b/arch/arm/mach-pxa/generic.c
--- a/arch/arm/mach-pxa/generic.c Tue Mar 4 19:30:05 2003
+++ b/arch/arm/mach-pxa/generic.c Tue Mar 4 19:30:05 2003
@@ -38,7 +38,7 @@
static unsigned char L_clk_mult[32] = { 0, 27, 32, 36, 40, 45, 0, };
/* Memory Frequency to Run Mode Frequency Multiplier (M) */
-static unsigned char M_clk_mult[4] = { 0, 1, 2, 0 };
+static unsigned char M_clk_mult[4] = { 0, 1, 2, 4 };
/* Run Mode Frequency to Turbo Mode Frequency Multiplier (N) */
/* Note: we store the value N * 2 here. */
@@ -47,11 +47,12 @@
/* Crystal clock */
#define BASE_CLK 3686400
-
/*
- * Display what we were booted with.
+ * Get the clock frequency as reflected by CCCR and the turbo flag.
+ * We assume these values have been applied via a fcs.
+ * If info is not 0 we also display the current settings.
*/
-static int __init pxa_display_clocks(void)
+unsigned int get_clk_frequency_khz(int info)
{
unsigned long cccr, turbo;
unsigned int l, L, m, M, n2, N;
@@ -67,20 +68,24 @@
M = m * L;
N = n2 * M / 2;
- L += 5000;
- printk( KERN_INFO "Memory clock: %d.%02dMHz (*%d)\n",
- L / 1000000, (L % 1000000) / 10000, l );
- M += 5000;
- printk( KERN_INFO "Run Mode clock: %d.%02dMHz (*%d)\n",
- M / 1000000, (M % 1000000) / 10000, m );
- N += 5000;
- printk( KERN_INFO "Turbo Mode clock: %d.%02dMHz (*%d.%d, %sactive)\n",
- N / 1000000, (N % 1000000) / 10000, n2 / 2, (n2 % 2) * 5,
- (turbo & 1) ? "" : "in" );
+ if(info)
+ {
+ L += 5000;
+ printk( KERN_INFO "Memory clock: %d.%02dMHz (*%d)\n",
+ L / 1000000, (L % 1000000) / 10000, l );
+ M += 5000;
+ printk( KERN_INFO "Run Mode clock: %d.%02dMHz (*%d)\n",
+ M / 1000000, (M % 1000000) / 10000, m );
+ N += 5000;
+ printk( KERN_INFO "Turbo Mode clock: %d.%02dMHz (*%d.%d, %sactive)\n",
+ N / 1000000, (N % 1000000) / 10000, n2 / 2, (n2 % 2) * 5,
+ (turbo & 1) ? "" : "in" );
+ }
- return 0;
+ return (turbo & 1) ? (N/1000) : (M/1000);
}
+EXPORT_SYMBOL(get_clk_frequency_khz);
/*
* Return the current lclk requency in units of 10kHz
@@ -132,5 +137,5 @@
void __init pxa_map_io(void)
{
iotable_init(standard_io_desc, ARRAY_SIZE(standard_io_desc));
- pxa_display_clocks();
+ get_clk_frequency_khz(1);
}
diff -Nru a/arch/arm/mach-pxa/irq.c b/arch/arm/mach-pxa/irq.c
--- a/arch/arm/mach-pxa/irq.c Tue Mar 4 19:30:13 2003
+++ b/arch/arm/mach-pxa/irq.c Tue Mar 4 19:30:13 2003
@@ -86,7 +86,7 @@
}
/*
- * GPIO IRQs must be acknoledged. This is for GPIO 0 and 1.
+ * GPIO IRQs must be acknowledged. This is for GPIO 0 and 1.
*/
static void pxa_ack_low_gpio(unsigned int irq)
@@ -241,10 +241,4 @@
/* Install handler for GPIO 2-80 edge detect interrupts */
set_irq_chip(IRQ_GPIO_2_80, &pxa_internal_chip);
set_irq_chained_handler(IRQ_GPIO_2_80, pxa_gpio_demux_handler);
-
- /*
- * We generally don't want the LCD IRQ being
- * enabled as soon as we request it.
- */
- set_irq_flags(IRQ_LCD, IRQF_VALID | IRQF_NOAUTOEN);
}
diff -Nru a/arch/arm/mach-pxa/leds.c b/arch/arm/mach-pxa/leds.c
--- a/arch/arm/mach-pxa/leds.c Tue Mar 4 19:30:14 2003
+++ b/arch/arm/mach-pxa/leds.c Tue Mar 4 19:30:14 2003
@@ -27,4 +27,4 @@
return 0;
}
-__initcall(pxa_leds_init);
+core_initcall(pxa_leds_init);
diff -Nru a/arch/arm/mach-pxa/lubbock.c b/arch/arm/mach-pxa/lubbock.c
--- a/arch/arm/mach-pxa/lubbock.c Tue Mar 4 19:30:07 2003
+++ b/arch/arm/mach-pxa/lubbock.c Tue Mar 4 19:30:07 2003
@@ -13,6 +13,7 @@
*/
#include
#include
+#include
#include
#include
#include
@@ -31,7 +32,6 @@
#include
#include "generic.h"
-#include "sa1111.h"
static void lubbock_ack_irq(unsigned int irq)
{
@@ -106,24 +106,16 @@
static int __init lubbock_init(void)
{
- int ret;
-
- ret = sa1111_probe(LUBBOCK_SA1111_BASE);
- if (ret)
- return ret;
- sa1111_wake();
- sa1111_init_irq(LUBBOCK_SA1111_IRQ);
- return 0;
+ return sa1111_init(0x10000000, LUBBOCK_SA1111_IRQ);
}
-__initcall(lubbock_init);
+subsys_initcall(lubbock_init);
static struct map_desc lubbock_io_desc[] __initdata = {
/* virtual physical length type */
{ 0xf0000000, 0x08000000, 0x00100000, MT_DEVICE }, /* CPLD */
{ 0xf1000000, 0x0c000000, 0x00100000, MT_DEVICE }, /* LAN91C96 IO */
{ 0xf1100000, 0x0e000000, 0x00100000, MT_DEVICE }, /* LAN91C96 Attr */
- { 0xf4000000, 0x10000000, 0x00400000, MT_DEVICE } /* SA1111 */
};
static void __init lubbock_map_io(void)
diff -Nru a/arch/arm/mach-sa1100/irq.c b/arch/arm/mach-sa1100/irq.c
--- a/arch/arm/mach-sa1100/irq.c Tue Mar 4 19:30:14 2003
+++ b/arch/arm/mach-sa1100/irq.c Tue Mar 4 19:30:14 2003
@@ -68,7 +68,7 @@
}
/*
- * GPIO IRQs must be acknoledged. This is for IRQs from 0 to 10.
+ * GPIO IRQs must be acknowledged. This is for IRQs from 0 to 10.
*/
static void sa1100_low_gpio_ack(unsigned int irq)
{
diff -Nru a/arch/arm/mm/proc-arm6_7.S b/arch/arm/mm/proc-arm6_7.S
--- a/arch/arm/mm/proc-arm6_7.S Tue Mar 4 19:30:12 2003
+++ b/arch/arm/mm/proc-arm6_7.S Tue Mar 4 19:30:12 2003
@@ -97,7 +97,7 @@
tst r4, r4, lsr #21 @ C = bit 20
sbc r1, r1, r1 @ r1 = C - 1
and r2, r4, #15 << 24
- add pc, pc, r2, lsr #22 @ Now branch to the relevent processing routine
+ add pc, pc, r2, lsr #22 @ Now branch to the relevant processing routine
movs pc, lr
b Ldata_unknown
diff -Nru a/arch/cris/boot/rescue/head.S b/arch/cris/boot/rescue/head.S
--- a/arch/cris/boot/rescue/head.S Tue Mar 4 19:30:12 2003
+++ b/arch/cris/boot/rescue/head.S Tue Mar 4 19:30:12 2003
@@ -130,7 +130,7 @@
;; first put a jump test to give a possibility of upgrading the rescue code
;; without erasing/reflashing the sector. we put a longword of -1 here and if
- ;; its not -1, we jump using the value as jump target. since we can always
+ ;; it is not -1, we jump using the value as jump target. since we can always
;; change 1's to 0's without erasing the sector, it is possible to add new
;; code after this and altering the jumptarget in an upgrade.
diff -Nru a/arch/cris/drivers/eeprom.c b/arch/cris/drivers/eeprom.c
--- a/arch/cris/drivers/eeprom.c Tue Mar 4 19:30:07 2003
+++ b/arch/cris/drivers/eeprom.c Tue Mar 4 19:30:07 2003
@@ -815,7 +815,7 @@
i2c_outbyte( eeprom.select_cmd | 1 );
}
- if(i2c_getack());
+ if(i2c_getack())
{
break;
}
diff -Nru a/arch/cris/drivers/ethernet.c b/arch/cris/drivers/ethernet.c
--- a/arch/cris/drivers/ethernet.c Tue Mar 4 19:30:08 2003
+++ b/arch/cris/drivers/ethernet.c Tue Mar 4 19:30:08 2003
@@ -236,7 +236,7 @@
/* Network speed indication. */
static struct timer_list speed_timer = TIMER_INITIALIZER(NULL, 0, 0);
static struct timer_list clear_led_timer = TIMER_INITIALIZER(NULL, 0, 0);
-static int current_speed; /* Speed read from tranceiver */
+static int current_speed; /* Speed read from transceiver */
static int current_speed_selection; /* Speed selected by user */
static int led_next_time;
static int led_active;
@@ -276,7 +276,7 @@
static void e100_send_mdio_cmd(unsigned short cmd, int write_cmd);
static void e100_send_mdio_bit(unsigned char bit);
static unsigned char e100_receive_mdio_bit(void);
-static void e100_reset_tranceiver(void);
+static void e100_reset_transceiver(void);
static void e100_clear_network_leds(unsigned long dummy);
static void e100_set_network_leds(int active);
@@ -786,7 +786,7 @@
}
static void
-e100_reset_tranceiver(void)
+e100_reset_transceiver(void)
{
unsigned short cmd;
unsigned short data;
@@ -826,9 +826,9 @@
RESET_DMA(NETWORK_TX_DMA_NBR);
WAIT_DMA(NETWORK_TX_DMA_NBR);
- /* Reset the tranceiver. */
+ /* Reset the transceiver. */
- e100_reset_tranceiver();
+ e100_reset_transceiver();
/* and get rid of the packet that never got an interrupt */
diff -Nru a/arch/cris/drivers/lpslave/e100lpslavenet.c b/arch/cris/drivers/lpslave/e100lpslavenet.c
--- a/arch/cris/drivers/lpslave/e100lpslavenet.c Tue Mar 4 19:30:07 2003
+++ b/arch/cris/drivers/lpslave/e100lpslavenet.c Tue Mar 4 19:30:07 2003
@@ -129,7 +129,7 @@
static void e100_hardware_send_packet(unsigned long hostcmd, char *buf, int length);
static void update_rx_stats(struct net_device_stats *);
static void update_tx_stats(struct net_device_stats *);
-static void e100_reset_tranceiver(void);
+static void e100_reset_transceiver(void);
static void boot_slave(unsigned char *code);
@@ -528,7 +528,7 @@
}
static void
-e100_reset_tranceiver(void)
+e100_reset_transceiver(void)
{
/* To do: Reboot and setup slave Etrax */
}
@@ -554,9 +554,9 @@
RESET_DMA(4);
WAIT_DMA(4);
- /* Reset the tranceiver. */
+ /* Reset the transceiver. */
- e100_reset_tranceiver();
+ e100_reset_transceiver();
/* and get rid of the packet that never got an interrupt */
diff -Nru a/arch/cris/drivers/serial.c b/arch/cris/drivers/serial.c
--- a/arch/cris/drivers/serial.c Tue Mar 4 19:30:11 2003
+++ b/arch/cris/drivers/serial.c Tue Mar 4 19:30:11 2003
@@ -132,7 +132,7 @@
* Items worth noticing:
*
* No Etrax100 port 1 workarounds (does only compile on 2.4 anyway now)
- * RS485 is not ported (why cant it be done in userspace as on x86 ?)
+ * RS485 is not ported (why can't it be done in userspace as on x86 ?)
* Statistics done through async_icount - if any more stats are needed,
* that's the place to put them or in an arch-dep version of it.
* timeout_interrupt and the other fast timeout stuff not ported yet
@@ -1766,7 +1766,7 @@
B= Break character (0x00) with framing error.
E= Error byte with parity error received after B characters.
-F= "Faked" valid byte received immediatly after B characters.
+F= "Faked" valid byte received immediately after B characters.
V= Valid byte
1.
@@ -2802,7 +2802,7 @@
info->tx_ctrl |= (0x80 | 0x40); /* Set bit 7 (txd) and 6 (tr_enable) */
info->port[REG_TR_CTRL] = info->tx_ctrl;
- /* the DMA gets awfully confused if we toggle the tranceiver like this
+ /* the DMA gets awfully confused if we toggle the transceiver like this
* so we need to reset it
*/
*info->ocmdadr = 4;
diff -Nru a/arch/cris/kernel/kgdb.c b/arch/cris/kernel/kgdb.c
--- a/arch/cris/kernel/kgdb.c Tue Mar 4 19:30:13 2003
+++ b/arch/cris/kernel/kgdb.c Tue Mar 4 19:30:13 2003
@@ -152,7 +152,7 @@
* (IPL too high, disabled, ...)
*
* - The gdb stub is currently not reentrant, i.e. errors that happen therein
- * (e.g. accesing invalid memory) may not be caught correctly. This could
+ * (e.g. accessing invalid memory) may not be caught correctly. This could
* be removed in future by introducing a stack of struct registers.
*
*/
@@ -1486,7 +1486,7 @@
move.d $r0,[reg+0x62] ; Save the return address in BRP
move $usp,[reg+0x66] ; USP
-;; get the serial character (from debugport.c) and check if its a ctrl-c
+;; get the serial character (from debugport.c) and check if it is a ctrl-c
jsr getDebugChar
cmp.b 3, $r10
diff -Nru a/arch/cris/kernel/process.c b/arch/cris/kernel/process.c
--- a/arch/cris/kernel/process.c Tue Mar 4 19:30:10 2003
+++ b/arch/cris/kernel/process.c Tue Mar 4 19:30:10 2003
@@ -154,7 +154,7 @@
#if defined(CONFIG_ETRAX_WATCHDOG) && !defined(CONFIG_SVINTO_SIM)
cause_of_death = 0xbedead;
#else
- /* Since we dont plan to keep on reseting the watchdog,
+ /* Since we don't plan to keep on reseting the watchdog,
the key can be arbitrary hence three */
*R_WATCHDOG = IO_FIELD(R_WATCHDOG, key, 3) |
IO_STATE(R_WATCHDOG, enable, start);
@@ -226,7 +226,7 @@
swstack = ((struct switch_stack *)childregs) - 1;
- swstack->r9 = 0; /* parameter to ret_from_sys_call, 0 == dont restart the syscall */
+ swstack->r9 = 0; /* parameter to ret_from_sys_call, 0 == don't restart the syscall */
/* we want to return into ret_from_sys_call after the _resume */
diff -Nru a/arch/cris/kernel/setup.c b/arch/cris/kernel/setup.c
--- a/arch/cris/kernel/setup.c Tue Mar 4 19:30:13 2003
+++ b/arch/cris/kernel/setup.c Tue Mar 4 19:30:13 2003
@@ -164,7 +164,7 @@
paging_init();
- /* We dont use a command line yet, so just re-initialize it without
+ /* We don't use a command line yet, so just re-initialize it without
saving anything that might be there. */
*cmdline_p = command_line;
diff -Nru a/arch/cris/kernel/signal.c b/arch/cris/kernel/signal.c
--- a/arch/cris/kernel/signal.c Tue Mar 4 19:30:04 2003
+++ b/arch/cris/kernel/signal.c Tue Mar 4 19:30:04 2003
@@ -494,7 +494,7 @@
case -ERESTARTNOHAND:
/* ERESTARTNOHAND means that the syscall should only be
restarted if there was no handler for the signal, and since
- we only get here if there is a handler, we dont restart */
+ we only get here if there is a handler, we don't restart */
regs->r10 = -EINTR;
break;
diff -Nru a/arch/cris/kernel/time.c b/arch/cris/kernel/time.c
--- a/arch/cris/kernel/time.c Tue Mar 4 19:30:09 2003
+++ b/arch/cris/kernel/time.c Tue Mar 4 19:30:09 2003
@@ -45,7 +45,7 @@
#include
-u64 jiffies_64;
+u64 jiffies_64 = INITIAL_JIFFIES;
static int have_rtc; /* used to remember if we have an RTC or not */
diff -Nru a/arch/cris/mm/fault.c b/arch/cris/mm/fault.c
--- a/arch/cris/mm/fault.c Tue Mar 4 19:30:13 2003
+++ b/arch/cris/mm/fault.c Tue Mar 4 19:30:13 2003
@@ -170,7 +170,7 @@
if (miss) {
/* see if the pte exists at all
- * refer through current_pgd, dont use mm->pgd
+ * refer through current_pgd, don't use mm->pgd
*/
pmd = (pmd_t *)(current_pgd + pgd_index(address));
diff -Nru a/arch/cris/mm/tlb.c b/arch/cris/mm/tlb.c
--- a/arch/cris/mm/tlb.c Tue Mar 4 19:30:04 2003
+++ b/arch/cris/mm/tlb.c Tue Mar 4 19:30:04 2003
@@ -58,7 +58,7 @@
int i;
unsigned long flags;
- /* the vpn of i & 0xf is so we dont write similar TLB entries
+ /* the vpn of i & 0xf is so we don't write similar TLB entries
* in the same 4-way entry group. details..
*/
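The comment above alludes to the Etrax TLB being organized as 4-way entry groups selected by the low vpn bits. A minimal C model (hypothetical names and sizes, not kernel code) of why `i & 0xf` spreads invalidated entries across groups:

```c
#include <assert.h>

/* Hypothetical model of a 64-entry TLB organized as 16 groups of 4 ways,
 * with the group selected by the low 4 bits of the vpn.  Writing
 * vpn = i & 0xf while invalidating entry i spreads the dummy entries
 * across all 16 groups instead of piling similar entries into one
 * 4-way group. */
#define TLB_ENTRIES 64
#define TLB_WAYS    4
#define TLB_GROUPS  (TLB_ENTRIES / TLB_WAYS)   /* 16 */

static unsigned tlb_group(unsigned vpn)
{
	return vpn & (TLB_GROUPS - 1);         /* the "i & 0xf" above */
}
```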
diff -Nru a/arch/i386/Kconfig b/arch/i386/Kconfig
--- a/arch/i386/Kconfig Tue Mar 4 19:30:04 2003
+++ b/arch/i386/Kconfig Tue Mar 4 19:30:04 2003
@@ -19,8 +19,13 @@
default y
config SWAP
- bool
+ bool "Support for paging of anonymous memory"
default y
+ help
+ This option allows you to choose whether you want to have support
+ for so-called swap devices or swap files in your kernel that are
+ used to provide more virtual memory than the actual RAM present
+ in your computer. If unsure, say Y.
config SBUS
bool
@@ -75,6 +80,11 @@
If you don't have one of these computers, you should say N here.
+config ACPI_SRAT
+ bool
+ default y
+ depends on NUMA && X86_SUMMIT
+
config X86_BIGSMP
bool "Support for other sub-arch SMP systems with more than 8 CPUs"
help
@@ -337,11 +347,6 @@
depends on MWINCHIP3D || MWINCHIP2 || MWINCHIPC6 || MCYRIXIII || MELAN || MK6 || M586MMX || M586TSC || M586 || M486 || MVIAC3_2
default y
-config X86_TSC
- bool
- depends on MWINCHIP3D || MWINCHIP2 || MCRUSOE || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2
- default y
-
config X86_GOOD_APIC
bool
depends on MK7 || MPENTIUM4 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || MK8
@@ -483,7 +488,7 @@
# Common NUMA Features
config NUMA
bool "Numa Memory Allocation Support"
- depends on X86_NUMAQ
+ depends on (HIGHMEM64G && (X86_NUMAQ || (X86_SUMMIT && ACPI && !ACPI_HT_ONLY)))
config DISCONTIGMEM
bool
@@ -495,6 +500,11 @@
depends on NUMA
default y
+config X86_TSC
+ bool
+ depends on (MWINCHIP3D || MWINCHIP2 || MCRUSOE || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2) && !X86_NUMAQ
+ default y
+
config X86_MCE
bool "Machine Check Exception"
---help---
@@ -750,6 +760,13 @@
config HAVE_DEC_LOCK
bool
depends on (SMP || PREEMPT) && X86_CMPXCHG
+ default y
+
+# turning this on wastes a bunch of space.
+# Summit needs it only when NUMA is on
+config BOOT_IOREMAP
+ bool
+ depends on (X86_SUMMIT && NUMA)
default y
endmenu
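The Kconfig rework above moves X86_TSC below the NUMA options and adds a `!X86_NUMAQ` guard, since NUMA-Q systems cannot rely on a single synchronized TSC. A toy C predicate (illustrative parameter names, not kernel symbols) showing the effect of the new dependency expression:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy evaluation of the reworked X86_TSC dependency:
 * (one of the TSC-capable CPU types) && !X86_NUMAQ.
 * Parameter names are illustrative only. */
static bool x86_tsc(bool tsc_capable_cpu_type, bool x86_numaq)
{
	return tsc_capable_cpu_type && !x86_numaq;
}
```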
diff -Nru a/arch/i386/boot/bootsect.S b/arch/i386/boot/bootsect.S
--- a/arch/i386/boot/bootsect.S Tue Mar 4 19:30:09 2003
+++ b/arch/i386/boot/bootsect.S Tue Mar 4 19:30:09 2003
@@ -405,7 +405,7 @@
ret
sectors: .word 0
-disksizes: .byte 36, 18, 15, 9
+disksizes: .byte 36, 21, 18, 15, 9
msg1: .byte 13, 10
.ascii "Loading"
diff -Nru a/arch/i386/kernel/Makefile b/arch/i386/kernel/Makefile
--- a/arch/i386/kernel/Makefile Tue Mar 4 19:30:05 2003
+++ b/arch/i386/kernel/Makefile Tue Mar 4 19:30:05 2003
@@ -28,6 +28,7 @@
obj-$(CONFIG_EDD) += edd.o
obj-$(CONFIG_MODULES) += module.o
obj-y += sysenter.o
+obj-$(CONFIG_ACPI_SRAT) += srat.o
EXTRA_AFLAGS := -traditional
diff -Nru a/arch/i386/kernel/acpi/boot.c b/arch/i386/kernel/acpi/boot.c
--- a/arch/i386/kernel/acpi/boot.c Tue Mar 4 19:30:13 2003
+++ b/arch/i386/kernel/acpi/boot.c Tue Mar 4 19:30:13 2003
@@ -24,6 +24,7 @@
*/
#include
+#include
#include
#include
#include
diff -Nru a/arch/i386/kernel/acpi/sleep.c b/arch/i386/kernel/acpi/sleep.c
--- a/arch/i386/kernel/acpi/sleep.c Tue Mar 4 19:30:08 2003
+++ b/arch/i386/kernel/acpi/sleep.c Tue Mar 4 19:30:08 2003
@@ -2,6 +2,7 @@
* sleep.c - x86-specific ACPI sleep support.
*
* Copyright (C) 2001-2003 Patrick Mochel
+ * Copyright (C) 2001-2003 Pavel Machek
*/
#include
@@ -34,10 +35,8 @@
*/
int acpi_save_state_mem (void)
{
-#if CONFIG_X86_PAE
- panic("S3 and PAE do not like each other for now.");
- return 1;
-#endif
+ if (!acpi_wakeup_address)
+ return 1;
init_low_mapping(swapper_pg_dir, USER_PTRS_PER_PGD);
memcpy((void *) acpi_wakeup_address, &wakeup_start, &wakeup_end - &wakeup_start);
acpi_copy_wakeup_routine(acpi_wakeup_address);
@@ -65,17 +64,24 @@
/**
* acpi_reserve_bootmem - do _very_ early ACPI initialisation
*
- * We allocate a page in low memory for the wakeup
+ * We allocate a page from the first 1MB of memory for the wakeup
* routine for when we come back from a sleep state. The
- * runtime allocator allows specification of <16M pages, but not
- * <1M pages.
+ * runtime allocator allows specification of <16MB pages, but not
+ * <1MB pages.
*/
void __init acpi_reserve_bootmem(void)
{
+ if ((&wakeup_end - &wakeup_start) > PAGE_SIZE) {
+ printk(KERN_ERR "ACPI: Wakeup code way too big, S3 disabled.\n");
+ return;
+ }
+#if CONFIG_X86_PAE
+ printk(KERN_ERR "ACPI: S3 and PAE do not like each other for now, S3 disabled.\n");
+ return;
+#endif
acpi_wakeup_address = (unsigned long)alloc_bootmem_low(PAGE_SIZE);
- if ((&wakeup_end - &wakeup_start) > PAGE_SIZE)
- printk(KERN_CRIT "ACPI: Wakeup code way too big, will crash on attempt to suspend\n");
- printk(KERN_DEBUG "ACPI: have wakeup address 0x%8.8lx\n", acpi_wakeup_address);
+ if (!acpi_wakeup_address)
+ printk(KERN_ERR "ACPI: Cannot allocate lowmem, S3 disabled.\n");
}
static int __init acpi_sleep_setup(char *str)
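The sleep.c hunks above reorder the checks so that oversized wakeup code and a failed low-memory allocation both disable S3 cleanly instead of crashing at suspend time. A sketch with a hypothetical helper (`lowmem_page` stands in for the result of `alloc_bootmem_low()`):

```c
#include <assert.h>

#define PAGE_SIZE 4096UL

/* Sketch (hypothetical helper, not kernel code) of the reordered logic
 * in acpi_reserve_bootmem(): refuse oversized wakeup code up front, and
 * treat a failed low-memory allocation as "S3 disabled". */
static unsigned long reserve_wakeup(unsigned long code_size,
                                    unsigned long lowmem_page)
{
	if (code_size > PAGE_SIZE)
		return 0;    /* "Wakeup code way too big, S3 disabled" */
	if (!lowmem_page)
		return 0;    /* "Cannot allocate lowmem, S3 disabled" */
	return lowmem_page;
}
```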
diff -Nru a/arch/i386/kernel/acpi/wakeup.S b/arch/i386/kernel/acpi/wakeup.S
--- a/arch/i386/kernel/acpi/wakeup.S Tue Mar 4 19:30:05 2003
+++ b/arch/i386/kernel/acpi/wakeup.S Tue Mar 4 19:30:05 2003
@@ -44,6 +44,9 @@
testl $1, video_flags - wakeup_code
jz 1f
lcall $0xc000,$3
+ movw %cs, %ax
+ movw %ax, %ds # Bios might have played with that
+ movw %ax, %ss
1:
testl $2, video_flags - wakeup_code
@@ -314,6 +317,31 @@
movl saved_context_edi, %edi
call restore_processor_state
pushl saved_context_eflags ; popfl
+ ret
+
+ENTRY(do_suspend_lowlevel_s4bios)
+ cmpl $0,4(%esp)
+ jne ret_point
+ call save_processor_state
+
+ movl %esp, saved_context_esp
+ movl %eax, saved_context_eax
+ movl %ebx, saved_context_ebx
+ movl %ecx, saved_context_ecx
+ movl %edx, saved_context_edx
+ movl %ebp, saved_context_ebp
+ movl %esi, saved_context_esi
+ movl %edi, saved_context_edi
+ pushfl ; popl saved_context_eflags
+
+ movl $ret_point,saved_eip
+ movl %esp,saved_esp
+ movl %ebp,saved_ebp
+ movl %ebx,saved_ebx
+ movl %edi,saved_edi
+ movl %esi,saved_esi
+
+ call acpi_enter_sleep_state_s4bios
ret
ALIGN
diff -Nru a/arch/i386/kernel/apic.c b/arch/i386/kernel/apic.c
--- a/arch/i386/kernel/apic.c Tue Mar 4 19:30:14 2003
+++ b/arch/i386/kernel/apic.c Tue Mar 4 19:30:14 2003
@@ -665,7 +665,6 @@
}
set_bit(X86_FEATURE_APIC, boot_cpu_data.x86_capability);
mp_lapic_addr = APIC_DEFAULT_PHYS_BASE;
- boot_cpu_physical_apicid = 0;
if (nmi_watchdog != NMI_NONE)
nmi_watchdog = NMI_LOCAL_APIC;
@@ -1154,8 +1153,7 @@
connect_bsp_APIC();
- phys_cpu_present_map = 1;
- apic_write_around(APIC_ID, boot_cpu_physical_apicid);
+ phys_cpu_present_map = 1 << boot_cpu_physical_apicid;
apic_pm_init2();
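The apic.c change stops hardwiring APIC ID 0: the present-CPU map now gets the bit for the boot CPU's real physical APIC ID, as in `1 << boot_cpu_physical_apicid`. A one-function sketch:

```c
#include <assert.h>

/* Sketch of the fix above: build the present-CPU bitmap from the boot
 * CPU's actual physical APIC ID rather than assuming it is 0. */
static unsigned long phys_present_map(unsigned boot_apicid)
{
	return 1UL << boot_apicid;
}
```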
diff -Nru a/arch/i386/kernel/apm.c b/arch/i386/kernel/apm.c
--- a/arch/i386/kernel/apm.c Tue Mar 4 19:30:04 2003
+++ b/arch/i386/kernel/apm.c Tue Mar 4 19:30:04 2003
@@ -1096,7 +1096,7 @@
* @blank: on/off
*
* Attempt to blank the console, firstly by blanking just video device
- * zero, and if that fails (some BIOSes dont support it) then it blanks
+ * zero, and if that fails (some BIOSes don't support it) then it blanks
* all video devices. Typically the BIOS will do laptop backlight and
* monitor powerdown for us.
*/
diff -Nru a/arch/i386/kernel/cpu/amd.c b/arch/i386/kernel/cpu/amd.c
--- a/arch/i386/kernel/cpu/amd.c Tue Mar 4 19:30:14 2003
+++ b/arch/i386/kernel/cpu/amd.c Tue Mar 4 19:30:14 2003
@@ -151,11 +151,10 @@
case 6: /* An Athlon/Duron */
/* Bit 15 of Athlon specific MSR 15, needs to be 0
- * to enable SSE on Palomino/Morgan CPU's.
- * If the BIOS didn't enable it already, enable it
- * here.
+ * to enable SSE on Palomino/Morgan/Barton CPU's.
+ * If the BIOS didn't enable it already, enable it here.
*/
- if (c->x86_model == 6 || c->x86_model == 7) {
+ if (c->x86_model >= 6 && c->x86_model <= 10) {
if (!cpu_has(c, X86_FEATURE_XMM)) {
printk(KERN_INFO "Enabling disabled K7/SSE Support.\n");
rdmsr(MSR_K7_HWCR, l, h);
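The amd.c hunk widens the model check: the K7/SSE fixup (clearing bit 15 of MSR_K7_HWCR) now covers family-6 models 6 through 10 rather than only 6 and 7, and only fires when the XMM feature bit is still clear. A sketch of the predicate:

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the widened check: Palomino/Morgan/Barton-era parts
 * (models 6..10) may need the BIOS-left-off SSE enabled here. */
static bool k7_needs_sse_fixup(int x86_model, bool cpu_has_xmm)
{
	return x86_model >= 6 && x86_model <= 10 && !cpu_has_xmm;
}
```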
diff -Nru a/arch/i386/kernel/cpu/centaur.c b/arch/i386/kernel/cpu/centaur.c
--- a/arch/i386/kernel/cpu/centaur.c Tue Mar 4 19:30:10 2003
+++ b/arch/i386/kernel/cpu/centaur.c Tue Mar 4 19:30:10 2003
@@ -412,8 +412,9 @@
size >>= 8;
/* VIA also screwed up Nehemiah stepping 1, and made
- it return '65KB' instead of '64KB' */
- if ((c->x86==6) && (c->x86_model==9) && (c->x86_mask==1))
+ it return '65KB' instead of '64KB'
+ - Note, it seems this may only be in engineering samples. */
+ if ((c->x86==6) && (c->x86_model==9) && (c->x86_mask==1) && (size==65))
size -=1;
return size;
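The centaur.c hunk tightens the Nehemiah workaround: only a reported cache size of exactly 65 (the stepping-1 quirk, apparently limited to engineering samples) is corrected down to 64. A sketch of the corrected guard:

```c
#include <assert.h>

/* Sketch of the tightened workaround: only the bogus reported size of
 * exactly 65 on Nehemiah stepping 1 is corrected to 64; other values
 * pass through unchanged. */
static int nehemiah_cache_size(int x86, int model, int mask, int size)
{
	if (x86 == 6 && model == 9 && mask == 1 && size == 65)
		size -= 1;
	return size;
}
```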
diff -Nru a/arch/i386/kernel/cpu/cpufreq/Kconfig b/arch/i386/kernel/cpu/cpufreq/Kconfig
--- a/arch/i386/kernel/cpu/cpufreq/Kconfig Tue Mar 4 19:30:05 2003
+++ b/arch/i386/kernel/cpu/cpufreq/Kconfig Tue Mar 4 19:30:05 2003
@@ -18,20 +18,6 @@
source "drivers/cpufreq/Kconfig"
-config CPU_FREQ_24_API
- bool "/proc/sys/cpu/ interface (2.4. / OLD)"
- depends on CPU_FREQ
- help
- This enables the /proc/sys/cpu/ sysctl interface for controlling
- CPUFreq, as known from the 2.4.-kernel patches for CPUFreq. 2.5
- uses a sysfs interface instead. Please note that some drivers do
- not work well with the 2.4. /proc/sys/cpu sysctl interface,
- so if in doubt, say N here.
-
- For details, take a look at linux/Documentation/cpufreq.
-
- If in doubt, say N.
-
config CPU_FREQ_TABLE
tristate "CPU frequency table helpers"
depends on CPU_FREQ
@@ -56,6 +42,16 @@
If in doubt, say N.
+config X86_ACPI_CPUFREQ_PROC_INTF
+ bool "/proc/acpi/processor/../performance interface (deprecated)"
+ depends on X86_ACPI_CPUFREQ && PROC_FS
+ help
+ This enables the deprecated /proc/acpi/processor/../performance
+ interface. While it is helpful for debugging, the generic,
+ cross-architecture cpufreq interfaces should be used.
+
+ If in doubt, say N.
+
config ELAN_CPUFREQ
tristate "AMD Elan"
depends on CPU_FREQ_TABLE && MELAN
@@ -139,7 +135,7 @@
config X86_LONGHAUL
tristate "VIA Cyrix III Longhaul"
- depends on CPU_FREQ
+ depends on CPU_FREQ_TABLE
help
This adds the CPUFreq driver for VIA Samuel/CyrixIII,
VIA Cyrix Samuel/C3, VIA Cyrix Ezra and VIA Cyrix Ezra-T
diff -Nru a/arch/i386/kernel/cpu/cpufreq/acpi.c b/arch/i386/kernel/cpu/cpufreq/acpi.c
--- a/arch/i386/kernel/cpu/cpufreq/acpi.c Tue Mar 4 19:30:04 2003
+++ b/arch/i386/kernel/cpu/cpufreq/acpi.c Tue Mar 4 19:30:04 2003
@@ -1,5 +1,5 @@
/*
- * acpi_processor_perf.c - ACPI Processor P-States Driver ($Revision: 71 $)
+ * acpi_processor_perf.c - ACPI Processor P-States Driver ($Revision: 1.3 $)
*
* Copyright (C) 2001, 2002 Andy Grover
* Copyright (C) 2001, 2002 Paul Diefenbaugh
@@ -50,23 +50,12 @@
MODULE_LICENSE("GPL");
-/* Performance Management */
-
static struct acpi_processor_performance *performance;
-static struct cpufreq_driver acpi_cpufreq_driver;
-
-static int acpi_processor_perf_open_fs(struct inode *inode, struct file *file);
-static struct file_operations acpi_processor_perf_fops = {
- .open = acpi_processor_perf_open_fs,
- .read = seq_read,
- .llseek = seq_lseek,
- .release = single_release,
-};
static int
acpi_processor_get_performance_control (
- struct acpi_processor *pr)
+ struct acpi_processor_performance *perf)
{
int result = 0;
acpi_status status = 0;
@@ -77,7 +66,7 @@
ACPI_FUNCTION_TRACE("acpi_processor_get_performance_control");
- status = acpi_evaluate_object(pr->handle, "_PCT", NULL, &buffer);
+ status = acpi_evaluate_object(perf->pr->handle, "_PCT", NULL, &buffer);
if(ACPI_FAILURE(status)) {
ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Error evaluating _PCT\n"));
return_VALUE(-ENODEV);
@@ -116,7 +105,7 @@
goto end;
}
- pr->performance->control_register = (u16) reg->address;
+ perf->control_register = (u16) reg->address;
/*
* status_register
@@ -143,12 +132,12 @@
goto end;
}
- pr->performance->status_register = (u16) reg->address;
+ perf->status_register = (u16) reg->address;
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
"control_register[0x%04x] status_register[0x%04x]\n",
- pr->performance->control_register,
- pr->performance->status_register));
+ perf->control_register,
+ perf->status_register));
end:
acpi_os_free(buffer.pointer);
@@ -159,7 +148,7 @@
static int
acpi_processor_get_performance_states (
- struct acpi_processor* pr)
+ struct acpi_processor_performance * perf)
{
int result = 0;
acpi_status status = AE_OK;
@@ -171,7 +160,7 @@
ACPI_FUNCTION_TRACE("acpi_processor_get_performance_states");
- status = acpi_evaluate_object(pr->handle, "_PSS", NULL, &buffer);
+ status = acpi_evaluate_object(perf->pr->handle, "_PSS", NULL, &buffer);
if(ACPI_FAILURE(status)) {
ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Error evaluating _PSS\n"));
return_VALUE(-ENODEV);
@@ -188,20 +177,20 @@
pss->package.count));
if (pss->package.count > ACPI_PROCESSOR_MAX_PERFORMANCE) {
- pr->performance->state_count = ACPI_PROCESSOR_MAX_PERFORMANCE;
+ perf->state_count = ACPI_PROCESSOR_MAX_PERFORMANCE;
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
"Limiting number of states to max (%d)\n",
ACPI_PROCESSOR_MAX_PERFORMANCE));
}
else
- pr->performance->state_count = pss->package.count;
+ perf->state_count = pss->package.count;
- if (pr->performance->state_count > 1)
- pr->flags.performance = 1;
+ if (perf->state_count > 1)
+ perf->pr->flags.performance = 1;
- for (i = 0; i < pr->performance->state_count; i++) {
+ for (i = 0; i < perf->state_count; i++) {
- struct acpi_processor_px *px = &(pr->performance->states[i]);
+ struct acpi_processor_px *px = &(perf->states[i]);
state.length = sizeof(struct acpi_processor_px);
state.pointer = px;
@@ -236,7 +225,7 @@
static int
acpi_processor_set_performance (
- struct acpi_processor *pr,
+ struct acpi_processor_performance *perf,
int state)
{
u16 port = 0;
@@ -246,38 +235,38 @@
ACPI_FUNCTION_TRACE("acpi_processor_set_performance");
- if (!pr)
+ if (!perf || !perf->pr)
return_VALUE(-EINVAL);
- if (!pr->flags.performance)
+ if (!perf->pr->flags.performance)
return_VALUE(-ENODEV);
- if (state >= pr->performance->state_count) {
+ if (state >= perf->state_count) {
ACPI_DEBUG_PRINT((ACPI_DB_WARN,
"Invalid target state (P%d)\n", state));
return_VALUE(-ENODEV);
}
- if (state < pr->performance_platform_limit) {
+ if (state < perf->pr->performance_platform_limit) {
ACPI_DEBUG_PRINT((ACPI_DB_WARN,
"Platform limit (P%d) overrides target state (P%d)\n",
- pr->performance->platform_limit, state));
+ perf->pr->performance_platform_limit, state));
return_VALUE(-ENODEV);
}
- if (state == pr->performance->state) {
+ if (state == perf->state) {
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
"Already at target state (P%d)\n", state));
return_VALUE(0);
}
ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Transitioning from P%d to P%d\n",
- pr->performance->state, state));
+ perf->state, state));
/* cpufreq frequency struct */
- cpufreq_freqs.cpu = pr->id;
- cpufreq_freqs.old = pr->performance->states[pr->performance->state].core_frequency;
- cpufreq_freqs.new = pr->performance->states[state].core_frequency;
+ cpufreq_freqs.cpu = perf->pr->id;
+ cpufreq_freqs.old = perf->states[perf->state].core_frequency;
+ cpufreq_freqs.new = perf->states[state].core_frequency;
/* notify cpufreq */
cpufreq_notify_transition(&cpufreq_freqs, CPUFREQ_PRECHANGE);
@@ -287,8 +276,8 @@
* control_register.
*/
- port = pr->performance->control_register;
- value = (u16) pr->performance->states[state].control;
+ port = perf->control_register;
+ value = (u16) perf->states[state].control;
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
"Writing 0x%02x to port 0x%04x\n", value, port));
@@ -302,15 +291,15 @@
* giving up.
*/
- port = pr->performance->status_register;
+ port = perf->status_register;
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
"Looking for 0x%02x from port 0x%04x\n",
- (u8) pr->performance->states[state].status, port));
+ (u8) perf->states[state].status, port));
for (i=0; i<100; i++) {
value = inb(port);
- if (value == (u8) pr->performance->states[state].status)
+ if (value == (u8) perf->states[state].status)
break;
udelay(10);
}
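The loop above polls the status port for the target P-state's status value, up to 100 reads spaced 10 microseconds apart; the caller then compares the last value read. A port-free sketch (an array stands in for successive `inb()` reads; names are hypothetical):

```c
#include <assert.h>

/* Model of the post-transition poll in acpi_processor_set_performance():
 * keep reading until the expected status value shows up or the retry
 * budget (100 reads, 10us apart in the driver) runs out, then return
 * the last value read for the caller to compare. */
static unsigned char poll_status(unsigned char expected,
                                 const unsigned char *reads, int nreads)
{
	unsigned char value = 0;
	int i;

	for (i = 0; i < 100 && i < nreads; i++) {
		value = reads[i];
		if (value == expected)
			break;
		/* udelay(10) between reads in the real driver */
	}
	return value;
}
```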
@@ -318,7 +307,7 @@
/* notify cpufreq */
cpufreq_notify_transition(&cpufreq_freqs, CPUFREQ_POSTCHANGE);
- if (value != pr->performance->states[state].status) {
+ if (value != perf->states[state].status) {
unsigned int tmp = cpufreq_freqs.new;
cpufreq_freqs.new = cpufreq_freqs.old;
cpufreq_freqs.old = tmp;
@@ -332,11 +321,23 @@
"Transition successful after %d microseconds\n",
i * 10));
- pr->performance->state = state;
+ perf->state = state;
return_VALUE(0);
}
+
+#ifdef CONFIG_X86_ACPI_CPUFREQ_PROC_INTF
+/* /proc/acpi/processor/../performance interface (DEPRECATED) */
+
+static int acpi_processor_perf_open_fs(struct inode *inode, struct file *file);
+static struct file_operations acpi_processor_perf_fops = {
+ .open = acpi_processor_perf_open_fs,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
static int acpi_processor_perf_seq_show(struct seq_file *seq, void *offset)
{
struct acpi_processor *pr = (struct acpi_processor *)seq->private;
@@ -347,7 +348,7 @@
if (!pr)
goto end;
- if (!pr->flags.performance) {
+ if (!pr->flags.performance || !pr->performance) {
seq_puts(seq, "\n");
goto end;
}
@@ -379,8 +380,8 @@
acpi_processor_write_performance (
struct file *file,
const char *buffer,
- unsigned long count,
- void *data)
+ size_t count,
+ loff_t *data)
{
int result = 0;
struct acpi_processor *pr = (struct acpi_processor *) data;
@@ -411,24 +412,78 @@
return_VALUE(count);
}
+static void
+acpi_cpufreq_add_file (
+ struct acpi_processor *pr)
+{
+ struct proc_dir_entry *entry = NULL;
+ struct acpi_device *device = NULL;
+
+ ACPI_FUNCTION_TRACE("acpi_cpufreq_addfile");
+
+ if (acpi_bus_get_device(pr->handle, &device))
+ return_VOID;
+
+ /* add file 'performance' [R/W] */
+ entry = create_proc_entry(ACPI_PROCESSOR_FILE_PERFORMANCE,
+ S_IFREG|S_IRUGO|S_IWUSR, acpi_device_dir(device));
+ if (!entry)
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
+ "Unable to create '%s' fs entry\n",
+ ACPI_PROCESSOR_FILE_PERFORMANCE));
+ else {
+ entry->proc_fops = &acpi_processor_perf_fops;
+ entry->proc_fops->write = acpi_processor_write_performance;
+ entry->data = acpi_driver_data(device);
+ }
+ return_VOID;
+}
+
+static void
+acpi_cpufreq_remove_file (
+ struct acpi_processor *pr)
+{
+ struct acpi_device *device = NULL;
+
+ ACPI_FUNCTION_TRACE("acpi_cpufreq_remove_file");
+
+ if (acpi_bus_get_device(pr->handle, &device))
+ return_VOID;
+
+ /* remove file 'performance' */
+ remove_proc_entry(ACPI_PROCESSOR_FILE_PERFORMANCE,
+ acpi_device_dir(device));
+
+ return_VOID;
+}
+
+#else
+static void acpi_cpufreq_add_file (struct acpi_processor *pr) { return; }
+static void acpi_cpufreq_remove_file (struct acpi_processor *pr) { return; }
+#endif /* CONFIG_X86_ACPI_CPUFREQ_PROC_INTF */
+
static int
-acpi_cpufreq_setpolicy (
- struct cpufreq_policy *policy)
+acpi_cpufreq_target (
+ struct cpufreq_policy *policy,
+ unsigned int target_freq,
+ unsigned int relation)
{
- struct acpi_processor *pr = performance[policy->cpu].pr;
+ struct acpi_processor_performance *perf = &performance[policy->cpu];
unsigned int next_state = 0;
unsigned int result = 0;
ACPI_FUNCTION_TRACE("acpi_cpufreq_setpolicy");
- result = cpufreq_frequency_table_setpolicy(policy,
- &performance[policy->cpu].freq_table[pr->limit.state.px],
+ result = cpufreq_frequency_table_target(policy,
+ &perf->freq_table[perf->pr->limit.state.px],
+ target_freq,
+ relation,
&next_state);
if (result)
return_VALUE(result);
- result = acpi_processor_set_performance (pr, next_state);
+ result = acpi_processor_set_performance (perf, next_state);
return_VALUE(result);
}
@@ -439,18 +494,17 @@
struct cpufreq_policy *policy)
{
unsigned int result = 0;
- unsigned int cpu = policy->cpu;
- struct acpi_processor *pr = performance[policy->cpu].pr;
+ struct acpi_processor_performance *perf = &performance[policy->cpu];
ACPI_FUNCTION_TRACE("acpi_cpufreq_verify");
result = cpufreq_frequency_table_verify(policy,
- &performance[cpu].freq_table[pr->limit.state.px]);
+ &perf->freq_table[perf->pr->limit.state.px]);
cpufreq_verify_within_limits(
policy,
- performance[cpu].states[performance[cpu].state_count - 1].core_frequency * 1000,
- performance[cpu].states[pr->limit.state.px].core_frequency * 1000);
+ perf->states[perf->state_count - 1].core_frequency * 1000,
+ perf->states[perf->pr->limit.state.px].core_frequency * 1000);
return_VALUE(result);
}
@@ -458,7 +512,7 @@
static int
acpi_processor_get_performance_info (
- struct acpi_processor *pr)
+ struct acpi_processor_performance *perf)
{
int result = 0;
acpi_status status = AE_OK;
@@ -466,31 +520,32 @@
ACPI_FUNCTION_TRACE("acpi_processor_get_performance_info");
- if (!pr)
+ if (!perf || !perf->pr || !perf->pr->handle)
return_VALUE(-EINVAL);
- status = acpi_get_handle(pr->handle, "_PCT", &handle);
+ status = acpi_get_handle(perf->pr->handle, "_PCT", &handle);
if (ACPI_FAILURE(status)) {
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
"ACPI-based processor performance control unavailable\n"));
return_VALUE(-ENODEV);
}
- result = acpi_processor_get_performance_control(pr);
+ result = acpi_processor_get_performance_control(perf);
if (result)
return_VALUE(result);
- result = acpi_processor_get_performance_states(pr);
+ result = acpi_processor_get_performance_states(perf);
if (result)
return_VALUE(result);
- result = acpi_processor_get_platform_limit(pr);
+ result = acpi_processor_get_platform_limit(perf->pr);
if (result)
return_VALUE(result);
return_VALUE(0);
}
+
static int
acpi_cpufreq_cpu_init (
struct cpufreq_policy *policy)
@@ -498,22 +553,18 @@
unsigned int i;
unsigned int cpu = policy->cpu;
struct acpi_processor *pr = NULL;
+ struct acpi_processor_performance *perf = &performance[policy->cpu];
unsigned int result = 0;
- struct proc_dir_entry *entry = NULL;
- struct acpi_device *device = NULL;
ACPI_FUNCTION_TRACE("acpi_cpufreq_cpu_init");
- acpi_processor_register_performance(&performance[cpu], &pr, cpu);
+ acpi_processor_register_performance(perf, &pr, cpu);
pr = performance[cpu].pr;
if (!pr)
return_VALUE(-ENODEV);
- if (acpi_bus_get_device(pr->handle, &device))
- return_VALUE(-ENODEV);
-
- result = acpi_processor_get_performance_info(performance[cpu].pr);
+ result = acpi_processor_get_performance_info(perf);
if (result)
return_VALUE(-ENODEV);
@@ -523,52 +574,62 @@
/* detect transition latency */
policy->cpuinfo.transition_latency = 0;
- for (i=0;i<performance[cpu].state_count;i++) {
- if (performance[cpu].states[i].transition_latency > policy->cpuinfo.transition_latency)
- policy->cpuinfo.transition_latency = performance[cpu].states[i].transition_latency;
+ for (i=0;i<perf->state_count;i++) {
+ if (perf->states[i].transition_latency > policy->cpuinfo.transition_latency)
+ policy->cpuinfo.transition_latency = perf->states[i].transition_latency;
}
policy->policy = CPUFREQ_POLICY_PERFORMANCE;
+ policy->cur = perf->states[pr->limit.state.px].core_frequency * 1000;
/* table init */
- for (i=0; i<=performance[cpu].state_count; i++)
+ for (i=0; i<=perf->state_count; i++)
{
- performance[cpu].freq_table[i].index = i;
- if (i<performance[cpu].state_count)
- performance[cpu].freq_table[i].frequency = performance[cpu].states[i].core_frequency * 1000;
+ perf->freq_table[i].index = i;
+ if (i<perf->state_count)
+ perf->freq_table[i].frequency = perf->states[i].core_frequency * 1000;
else
- performance[cpu].freq_table[i].frequency = CPUFREQ_TABLE_END;
+ perf->freq_table[i].frequency = CPUFREQ_TABLE_END;
}
-#ifdef CONFIG_CPU_FREQ_24_API
- acpi_cpufreq_driver.cpu_cur_freq[policy->cpu] = performance[cpu].states[pr->limit.state.px].core_frequency * 1000;
-#endif
+ result = cpufreq_frequency_table_cpuinfo(policy, &perf->freq_table[0]);
- result = cpufreq_frequency_table_cpuinfo(policy, &performance[cpu].freq_table[0]);
-
- /* add file 'performance' [R/W] */
- entry = create_proc_entry(ACPI_PROCESSOR_FILE_PERFORMANCE,
- S_IFREG|S_IRUGO|S_IWUSR, acpi_device_dir(device));
- if (!entry)
- ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
- "Unable to create '%s' fs entry\n",
- ACPI_PROCESSOR_FILE_PERFORMANCE));
- else {
- entry->proc_fops = &acpi_processor_perf_fops;
- entry->write_proc = acpi_processor_write_performance;
- entry->data = acpi_driver_data(device);
- }
+ acpi_cpufreq_add_file(pr);
return_VALUE(result);
}
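The table-init loop in `acpi_cpufreq_cpu_init()` above builds `state_count` entries (core_frequency in MHz scaled to kHz) plus a terminating sentinel at slot `state_count`. A self-contained sketch (sentinel value is illustrative, not the kernel's definition):

```c
#include <assert.h>

#define CPUFREQ_TABLE_END (~0U)   /* illustrative value, not the kernel's */

struct freq_entry {
	unsigned index;
	unsigned frequency;           /* kHz */
};

/* Sketch of the table-init loop: one entry per P-state, plus a
 * terminating CPUFREQ_TABLE_END sentinel at slot state_count. */
static void build_freq_table(struct freq_entry *t, const unsigned *mhz,
                             unsigned state_count)
{
	unsigned i;

	for (i = 0; i <= state_count; i++) {
		t[i].index = i;
		t[i].frequency = (i < state_count) ? mhz[i] * 1000
		                                   : CPUFREQ_TABLE_END;
	}
}
```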
+static int
+acpi_cpufreq_cpu_exit (
+ struct cpufreq_policy *policy)
+{
+ struct acpi_processor *pr = performance[policy->cpu].pr;
+
+ ACPI_FUNCTION_TRACE("acpi_cpufreq_cpu_exit");
+
+ acpi_cpufreq_remove_file(pr);
+
+ return_VALUE(0);
+}
+
+
+static struct cpufreq_driver acpi_cpufreq_driver = {
+ .verify = acpi_cpufreq_verify,
+ .target = acpi_cpufreq_target,
+ .init = acpi_cpufreq_cpu_init,
+ .exit = acpi_cpufreq_cpu_exit,
+ .name = "acpi-cpufreq",
+};
+
+
static int __init
acpi_cpufreq_init (void)
{
int result = 0;
int current_state = 0;
int i = 0;
- struct acpi_processor *pr;
+ struct acpi_processor *pr = NULL;
+ struct acpi_processor_performance *perf = NULL;
ACPI_FUNCTION_TRACE("acpi_cpufreq_init");
@@ -579,10 +640,9 @@
performance = kmalloc(NR_CPUS * sizeof(struct acpi_processor_performance), GFP_KERNEL);
if (!performance)
return_VALUE(-ENOMEM);
-
memset(performance, 0, NR_CPUS * sizeof(struct acpi_processor_performance));
- /* register struct acpi_performance performance */
+ /* register struct acpi_processor_performance performance */
for (i=0; iflags.performance)
goto found_capable_cpu;
@@ -604,10 +666,11 @@
goto err;
found_capable_cpu:
- current_state = pr->performance->state;
+ perf = pr->performance;
+ current_state = perf->state;
if (current_state == pr->limit.state.px) {
- result = acpi_processor_set_performance(pr, (pr->performance->state_count - 1));
+ result = acpi_processor_set_performance(perf, (perf->state_count - 1));
if (result) {
ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Disabled P-States due to failure while switching.\n"));
result = -ENODEV;
@@ -615,7 +678,7 @@
}
}
- result = acpi_processor_set_performance(pr, pr->limit.state.px);
+ result = acpi_processor_set_performance(perf, pr->limit.state.px);
if (result) {
ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Disabled P-States due to failure while switching.\n"));
result = -ENODEV;
@@ -623,7 +686,7 @@
}
if (current_state != 0) {
- result = acpi_processor_set_performance(pr, current_state);
+ result = acpi_processor_set_performance(perf, current_state);
if (result) {
ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Disabled P-States due to failure while switching.\n"));
result = -ENODEV;
@@ -639,7 +702,7 @@
/* error handling */
err:
- /* unregister struct acpi_performance performance */
+ /* unregister struct acpi_processor_performance performance */
for (i=0; i<NR_CPUS; i++) {
if (performance[i].pr) {
performance[i].pr->flags.performance = 0;
@@ -647,9 +710,7 @@
performance[i].pr = NULL;
}
}
-
kfree(performance);
-
return_VALUE(result);
}
@@ -668,7 +729,7 @@
cpufreq_unregister_driver(&acpi_cpufreq_driver);
- /* unregister struct acpi_performance performance */
+ /* unregister struct acpi_processor_performance performance */
for (i=0; i<NR_CPUS; i++) {
if (performance[i].pr) {
performance[i].pr->flags.performance = 0;
@@ -681,15 +742,6 @@
return_VOID;
}
-
-static struct cpufreq_driver acpi_cpufreq_driver = {
- .verify = acpi_cpufreq_verify,
- .setpolicy = acpi_cpufreq_setpolicy,
- .init = acpi_cpufreq_cpu_init,
- .exit = NULL,
- .policy = NULL,
- .name = "acpi-cpufreq",
-};
late_initcall(acpi_cpufreq_init);
diff -Nru a/arch/i386/kernel/cpu/cpufreq/elanfreq.c b/arch/i386/kernel/cpu/cpufreq/elanfreq.c
--- a/arch/i386/kernel/cpu/cpufreq/elanfreq.c Tue Mar 4 19:30:07 2003
+++ b/arch/i386/kernel/cpu/cpufreq/elanfreq.c Tue Mar 4 19:30:07 2003
@@ -31,15 +31,9 @@
#define REG_CSCIR 0x22 /* Chip Setup and Control Index Register */
#define REG_CSCDR 0x23 /* Chip Setup and Control Data Register */
-static struct cpufreq_driver *elanfreq_driver;
-
/* Module parameter */
static int max_freq;
-MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Robert Schwebel , Sven Geggus ");
-MODULE_DESCRIPTION("cpufreq driver for AMD's Elan CPUs");
-
struct s_elan_multiplier {
int clock; /* frequency in kHz */
int val40h; /* PMU Force Mode register */
@@ -127,11 +121,6 @@
struct cpufreq_freqs freqs;
- if (!elanfreq_driver) {
- printk(KERN_ERR "cpufreq: initialization problem or invalid target frequency\n");
- return;
- }
-
freqs.old = elanfreq_get_cpu_frequency();
freqs.new = elan_multiplier[state].clock;
freqs.cpu = 0; /* elanfreq.c is UP only driver */
@@ -187,11 +176,13 @@
return cpufreq_frequency_table_verify(policy, &elanfreq_table[0]);
}
-static int elanfreq_setpolicy (struct cpufreq_policy *policy)
+static int elanfreq_target (struct cpufreq_policy *policy,
+ unsigned int target_freq,
+ unsigned int relation)
{
unsigned int newstate = 0;
- if (cpufreq_frequency_table_setpolicy(policy, &elanfreq_table[0], &newstate))
+ if (cpufreq_frequency_table_target(policy, &elanfreq_table[0], target_freq, relation, &newstate))
return -EINVAL;
elanfreq_set_cpu_state(newstate);
@@ -204,6 +195,35 @@
* Module init and exit code
*/
+static int elanfreq_cpu_init(struct cpufreq_policy *policy)
+{
+ struct cpuinfo_x86 *c = cpu_data;
+ unsigned int i;
+
+ /* capability check */
+ if ((c->x86_vendor != X86_VENDOR_AMD) ||
+ (c->x86 != 4) || (c->x86_model!=10))
+ return -ENODEV;
+
+ /* max freq */
+ if (!max_freq)
+ max_freq = elanfreq_get_cpu_frequency();
+
+ /* table init */
+ for (i=0; (elanfreq_table[i].frequency != CPUFREQ_TABLE_END); i++) {
+ if (elanfreq_table[i].frequency > max_freq)
+ elanfreq_table[i].frequency = CPUFREQ_ENTRY_INVALID;
+ }
+
+ /* cpuinfo and default policy values */
+ policy->policy = CPUFREQ_POLICY_PERFORMANCE;
+ policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
+ policy->cur = elanfreq_get_cpu_frequency();
+
+ return cpufreq_frequency_table_cpuinfo(policy, &elanfreq_table[0]);
+}
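The new `elanfreq_cpu_init()` above marks every table entry above `max_freq` as invalid so the cpufreq core skips it. A self-contained sketch of that clamp (the two sentinel values are illustrative, not the kernel's definitions):

```c
#include <assert.h>

#define CPUFREQ_TABLE_END     (~0U)   /* illustrative values, */
#define CPUFREQ_ENTRY_INVALID (~1U)   /* not the kernel's     */

struct freq_entry {
	unsigned index;
	unsigned frequency;               /* kHz */
};

/* Sketch of the clamp in elanfreq_cpu_init(): every entry faster than
 * max_freq becomes CPUFREQ_ENTRY_INVALID; the CPUFREQ_TABLE_END
 * sentinel still terminates the walk. */
static void clamp_freq_table(struct freq_entry *t, unsigned max_freq)
{
	unsigned i;

	for (i = 0; t[i].frequency != CPUFREQ_TABLE_END; i++)
		if (t[i].frequency > max_freq)
			t[i].frequency = CPUFREQ_ENTRY_INVALID;
}
```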
+
+
#ifndef MODULE
/**
* elanfreq_setup - elanfreq command line parameter parsing
@@ -224,11 +244,18 @@
__setup("elanfreq=", elanfreq_setup);
#endif
+
+static struct cpufreq_driver elanfreq_driver = {
+ .verify = elanfreq_verify,
+ .target = elanfreq_target,
+ .init = elanfreq_cpu_init,
+ .name = "elanfreq",
+};
+
+
static int __init elanfreq_init(void)
{
struct cpuinfo_x86 *c = cpu_data;
- struct cpufreq_driver *driver;
- int ret, i;
/* Test if we have the right hardware */
if ((c->x86_vendor != X86_VENDOR_AMD) ||
@@ -238,63 +265,22 @@
return -ENODEV;
}
- driver = kmalloc(sizeof(struct cpufreq_driver) +
- NR_CPUS * sizeof(struct cpufreq_policy), GFP_KERNEL);
- if (!driver)
- return -ENOMEM;
- memset(driver, 0, sizeof(struct cpufreq_driver) +
- NR_CPUS * sizeof(struct cpufreq_policy));
-
- driver->policy = (struct cpufreq_policy *) (driver + 1);
-
- if (!max_freq)
- max_freq = elanfreq_get_cpu_frequency();
-
- /* table init */
- for (i=0; (elanfreq_table[i].frequency != CPUFREQ_TABLE_END); i++) {
- if (elanfreq_table[i].frequency > max_freq)
- elanfreq_table[i].frequency = CPUFREQ_ENTRY_INVALID;
- }
-
-#ifdef CONFIG_CPU_FREQ_24_API
- driver->cpu_cur_freq[0] = elanfreq_get_cpu_frequency();
-#endif
-
- driver->verify = &elanfreq_verify;
- driver->setpolicy = &elanfreq_setpolicy;
- strncpy(driver->name, "elanfreq", CPUFREQ_NAME_LEN);
-
- driver->policy[0].cpu = 0;
- ret = cpufreq_frequency_table_cpuinfo(&driver->policy[0], &elanfreq_table[0]);
- if (ret) {
- kfree(driver);
- return ret;
- }
- driver->policy[0].policy = CPUFREQ_POLICY_PERFORMANCE;
- driver->policy[0].cpuinfo.transition_latency = CPUFREQ_ETERNAL;
-
- elanfreq_driver = driver;
-
- ret = cpufreq_register(driver);
- if (ret) {
- elanfreq_driver = NULL;
- kfree(driver);
- }
-
- return ret;
+ return cpufreq_register_driver(&elanfreq_driver);
}
static void __exit elanfreq_exit(void)
{
- if (elanfreq_driver) {
- cpufreq_unregister();
- kfree(elanfreq_driver);
- }
+ cpufreq_unregister_driver(&elanfreq_driver);
}
-module_init(elanfreq_init);
-module_exit(elanfreq_exit);
MODULE_PARM (max_freq, "i");
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Robert Schwebel , Sven Geggus ");
+MODULE_DESCRIPTION("cpufreq driver for AMD's Elan CPUs");
+
+module_init(elanfreq_init);
+module_exit(elanfreq_exit);
diff -Nru a/arch/i386/kernel/cpu/cpufreq/gx-suspmod.c b/arch/i386/kernel/cpu/cpufreq/gx-suspmod.c
--- a/arch/i386/kernel/cpu/cpufreq/gx-suspmod.c Tue Mar 4 19:30:08 2003
+++ b/arch/i386/kernel/cpu/cpufreq/gx-suspmod.c Tue Mar 4 19:30:08 2003
@@ -116,7 +116,6 @@
struct pci_dev *cs55x0;
};
-static struct cpufreq_driver *gx_driver;
static struct gxfreq_params *gx_params;
static int stock_freq;
@@ -345,7 +344,7 @@
unsigned int tmp_freq = 0;
u8 tmp1, tmp2;
- if (!gx_driver || !stock_freq || !policy)
+ if (!stock_freq || !policy)
return -EINVAL;
policy->cpu = 0;
@@ -375,33 +374,71 @@
}
/*
- * cpufreq_gx_setpolicy:
+ * cpufreq_gx_target:
*
*/
-static int cpufreq_gx_setpolicy(struct cpufreq_policy *policy)
+static int cpufreq_gx_target(struct cpufreq_policy *policy,
+ unsigned int target_freq,
+ unsigned int relation)
{
+ u8 tmp1, tmp2;
+ unsigned int tmp_freq;
- if (!gx_driver || !stock_freq || !policy)
+ if (!stock_freq || !policy)
return -EINVAL;
policy->cpu = 0;
- if (policy->policy == CPUFREQ_POLICY_POWERSAVE) {
- /* here we need to make sure that we don't set the
- * frequency below policy->min (see comment in
- * cpufreq_gx_verify() - guarantee of processing
- * capacity.
- */
- u8 tmp1, tmp2;
- unsigned int tmp_freq = gx_validate_speed(policy->min, &tmp1, &tmp2);
- while (tmp_freq < policy->min) {
- tmp_freq += stock_freq / max_duration;
- tmp_freq = gx_validate_speed(tmp_freq, &tmp1, &tmp2);
- }
- gx_set_cpuspeed(tmp_freq);
+ tmp_freq = gx_validate_speed(target_freq, &tmp1, &tmp2);
+ while (tmp_freq < policy->min) {
+ tmp_freq += stock_freq / max_duration;
+ tmp_freq = gx_validate_speed(tmp_freq, &tmp1, &tmp2);
+ }
+ while (tmp_freq > policy->max) {
+ tmp_freq -= stock_freq / max_duration;
+ tmp_freq = gx_validate_speed(tmp_freq, &tmp1, &tmp2);
+ }
+
+ gx_set_cpuspeed(tmp_freq);
+
+ return 0;
+}
+
+static int cpufreq_gx_cpu_init(struct cpufreq_policy *policy)
+{
+ int maxfreq, curfreq;
+
+ if (!policy || policy->cpu != 0)
+ return -ENODEV;
+
+ /* determine maximum frequency */
+ if (pci_busclk) {
+ maxfreq = pci_busclk * gx_freq_mult[getCx86(CX86_DIR1) & 0x0f];
+ } else if (cpu_khz) {
+ maxfreq = cpu_khz;
} else {
- gx_set_cpuspeed(policy->max);
+ maxfreq = 30000 * gx_freq_mult[getCx86(CX86_DIR1) & 0x0f];
}
+ stock_freq = maxfreq;
+ curfreq = gx_get_cpuspeed();
+
+ dprintk("cpu max frequency is %d.\n", maxfreq);
+ dprintk("cpu current frequency is %dkHz.\n",curfreq);
+
+ /* setup basic struct for cpufreq API */
+ policy->cpu = 0;
+
+ if (max_duration < POLICY_MIN_DIV)
+ policy->min = maxfreq / max_duration;
+ else
+ policy->min = maxfreq / POLICY_MIN_DIV;
+ policy->max = maxfreq;
+ policy->cur = curfreq;
+ policy->policy = CPUFREQ_POLICY_PERFORMANCE;
+ policy->cpuinfo.min_freq = maxfreq / max_duration;
+ policy->cpuinfo.max_freq = maxfreq;
+ policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
+
return 0;
}
@@ -409,11 +446,16 @@
* cpufreq_gx_init:
* MediaGX/Geode GX initialize cpufreq driver
*/
+static struct cpufreq_driver gx_suspmod_driver = {
+ .verify = cpufreq_gx_verify,
+ .target = cpufreq_gx_target,
+ .init = cpufreq_gx_cpu_init,
+ .name = "gx-suspmod",
+};
static int __init cpufreq_gx_init(void)
{
- int maxfreq,ret,curfreq;
- struct cpufreq_driver *driver;
+ int ret;
struct gxfreq_params *params;
struct pci_dev *gx_pci;
u32 class_rev;
@@ -428,21 +470,13 @@
dprintk("geode suspend modulation available.\n");
- driver = kmalloc(sizeof(struct cpufreq_driver) + NR_CPUS * sizeof(struct cpufreq_policy), GFP_KERNEL);
- if (driver == NULL)
- return -ENOMEM;
- memset(driver, 0, sizeof(struct cpufreq_driver) +
- NR_CPUS * sizeof(struct cpufreq_policy));
-
params = kmalloc(sizeof(struct gxfreq_params), GFP_KERNEL);
- if (params == NULL) {
- kfree(driver);
+ if (params == NULL)
return -ENOMEM;
- }
memset(params, 0, sizeof(struct gxfreq_params));
- driver->policy = (struct cpufreq_policy *)(driver + 1);
params->cs55x0 = gx_pci;
+ gx_params = params;
/* keep cs55x0 configurations */
pci_read_config_byte(params->cs55x0, PCI_SUSCFG, &(params->pci_suscfg));
@@ -453,45 +487,7 @@
pci_read_config_dword(params->cs55x0, PCI_CLASS_REVISION, &class_rev);
params->pci_rev = class_rev & 0xff;
- gx_params = params;
-
- /* determine maximum frequency */
- if (pci_busclk) {
- maxfreq = pci_busclk * gx_freq_mult[getCx86(CX86_DIR1) & 0x0f];
- } else if (cpu_khz) {
- maxfreq = cpu_khz;
- } else {
- maxfreq = 30000 * gx_freq_mult[getCx86(CX86_DIR1) & 0x0f];
- }
- stock_freq = maxfreq;
- curfreq = gx_get_cpuspeed();
-
- dprintk("cpu max frequency is %d.\n", maxfreq);
- dprintk("cpu current frequency is %dkHz.\n",curfreq);
-
- /* setup basic struct for cpufreq API */
-#ifdef CONFIG_CPU_FREQ_24_API
- driver->cpu_cur_freq[0] = curfreq;
-#endif
- driver->policy[0].cpu = 0;
-
- if (max_duration < POLICY_MIN_DIV)
- driver->policy[0].min = maxfreq / max_duration;
- else
- driver->policy[0].min = maxfreq / POLICY_MIN_DIV;
- driver->policy[0].max = maxfreq;
- driver->policy[0].policy = CPUFREQ_POLICY_PERFORMANCE;
- driver->policy[0].cpuinfo.min_freq = maxfreq / max_duration;
- driver->policy[0].cpuinfo.max_freq = maxfreq;
- driver->policy[0].cpuinfo.transition_latency = CPUFREQ_ETERNAL;
- driver->verify = &cpufreq_gx_verify;
- driver->setpolicy = &cpufreq_gx_setpolicy;
- strncpy(driver->name, "gx-suspmod", CPUFREQ_NAME_LEN);
-
- gx_driver = driver;
-
- if ((ret = cpufreq_register(driver))) {
- kfree(driver);
+ if ((ret = cpufreq_register_driver(&gx_suspmod_driver))) {
kfree(params);
return ret; /* register error! */
}
@@ -501,13 +497,8 @@
static void __exit cpufreq_gx_exit(void)
{
- if (gx_driver) {
- /* disable throttling */
- gx_set_cpuspeed(stock_freq);
- cpufreq_unregister();
- kfree(gx_driver);
- kfree(gx_params);
- }
+ cpufreq_unregister_driver(&gx_suspmod_driver);
+ kfree(gx_params);
}
MODULE_AUTHOR ("Hiroshi Miura ");
diff -Nru a/arch/i386/kernel/cpu/cpufreq/longhaul.c b/arch/i386/kernel/cpu/cpufreq/longhaul.c
--- a/arch/i386/kernel/cpu/cpufreq/longhaul.c Tue Mar 4 19:30:04 2003
+++ b/arch/i386/kernel/cpu/cpufreq/longhaul.c Tue Mar 4 19:30:04 2003
@@ -1,5 +1,5 @@
/*
- * $Id: longhaul.c,v 1.77 2002/10/31 21:17:40 db Exp $
+ * $Id: longhaul.c,v 1.87 2003/02/22 10:23:46 db Exp $
*
* (C) 2001 Dave Jones.
* (C) 2002 Padraig Brady.
@@ -48,6 +48,7 @@
/* Module parameters */
+static int prefer_slow_fsb;
static int dont_scale_voltage;
static int dont_scale_fsb;
static int current_fsb;
@@ -237,7 +238,6 @@
/* fsb values to favour high fsb speed (for e.g. if lowering CPU
freq because of heat, but want to maintain highest performance possible) */
static unsigned int perf_fsb_table[] = { 133, 100, 66, -1 };
-static unsigned int *fsb_search_table;
/* Voltage scales. Div by 1000 to get actual voltage. */
static int __initdata vrm85scales[32] = {
@@ -260,7 +260,7 @@
static int voltage_table[32];
static int highest_speed, lowest_speed; /* kHz */
static int longhaul; /* version. */
-static struct cpufreq_driver *longhaul_driver;
+static struct cpufreq_frequency_table *longhaul_table;
static int longhaul_get_cpu_fsb (void)
@@ -428,7 +428,7 @@
}
-static void __init longhaul_get_ranges (void)
+static int __init longhaul_get_ranges (void)
{
unsigned long lo, hi, invalue;
unsigned int minmult=0, maxmult=0, minfsb=0, maxfsb=0;
@@ -436,6 +436,9 @@
50,30,40,100,55,35,45,95,90,70,80,60,120,75,85,65,
-1,110,120,-1,135,115,125,105,130,150,160,140,-1,155,-1,145 };
unsigned int fsb_table[4] = { 133, 100, -1, 66 };
+ unsigned int fsbcount = 1;
+ unsigned int i, j, k = 0;
+ static unsigned int *fsb_search_table;
switch (longhaul) {
case 1:
@@ -472,6 +475,11 @@
dprintk (KERN_INFO "longhaul: Min FSB=%d Max FSB=%d\n",
minfsb, maxfsb);
+ fsbcount = 0;
+ for (i=0;i<4;i++) {
+ if((fsb_table[i] >= minfsb) && (fsb_table[i] <= maxfsb))
+ fsbcount++;
+ }
} else {
minfsb = maxfsb = current_fsb;
}
@@ -480,11 +488,37 @@
highest_speed = maxmult * maxfsb * 100;
lowest_speed = minmult * minfsb * 100;
-
dprintk (KERN_INFO "longhaul: MinMult(x10)=%d MaxMult(x10)=%d\n",
- minmult, maxmult);
+ minmult, maxmult);
dprintk (KERN_INFO "longhaul: Lowestspeed=%d Highestspeed=%d\n",
- lowest_speed, highest_speed);
+ lowest_speed, highest_speed);
+
+ longhaul_table = kmalloc((numscales * fsbcount + 1) * sizeof(struct cpufreq_frequency_table), GFP_KERNEL);
+ if(!longhaul_table)
+ return -ENOMEM;
+
+ if (prefer_slow_fsb)
+ fsb_search_table = perf_fsb_table; // yep, this is right: the last entry is preferred by cpufreq_frequency_table_* ...
+ else
+ fsb_search_table = power_fsb_table;
+
+ for (i=0; (i<4); i++) {
+ if ((fsb_search_table[i] > maxfsb) || (fsb_search_table[i] < minfsb) || (fsb_search_table[i] == -1))
+ continue;
+ for (j=0; (j<numscales); j++) {
+ if ((clock_ratio[j] > maxmult) || (clock_ratio[j] < minmult) || (clock_ratio[j] == -1))
+ continue;
+ longhaul_table[k].frequency= clock_ratio[j] * fsb_search_table[i] * 100;
+ longhaul_table[k].index = (j << 8) | (i);
+ k++;
+ }
+ }
+
+ longhaul_table[k].frequency = CPUFREQ_TABLE_END;
+ if (!k)
+ return -EINVAL;
+
+ return 0;
}
@@ -523,182 +557,34 @@
}
-static inline unsigned int longhaul_statecount_fsb(struct cpufreq_policy *policy, unsigned int fsb) {
- unsigned int i, count = 0;
-
- for(i=0; i<numscales; i++) {
- if (((clock_ratio[i] * fsb * 100) <= policy->max) &&
- ((clock_ratio[i] * fsb * 100) >= policy->min))
- count++;
- }
-
- return count;
-}
-
-
static int longhaul_verify(struct cpufreq_policy *policy)
{
- unsigned int number_states = 0;
- unsigned int i;
- unsigned int fsb_index = 0;
- unsigned int tmpfreq = 0;
- unsigned int newmax = -1;
-
- if (!policy || !longhaul_driver)
- return -EINVAL;
-
- policy->cpu = 0;
- cpufreq_verify_within_limits(policy, lowest_speed, highest_speed);
-
- if (can_scale_fsb==1) {
- for (fsb_index=0; fsb_search_table[fsb_index]!=-1; fsb_index++)
- number_states += longhaul_statecount_fsb(policy, fsb_search_table[fsb_index]);
- } else
- number_states = longhaul_statecount_fsb(policy, current_fsb);
-
- if (number_states)
- return 0;
-
- /* get frequency closest above current policy->max */
- if (can_scale_fsb==1) {
- for (fsb_index=0; fsb_search_table[fsb_index] != -1; fsb_index++)
- for(i=0; i<numscales; i++) {
- tmpfreq = clock_ratio[i] * fsb_search_table[fsb_index] * 100;
- if ((tmpfreq > policy->max) &&
- (tmpfreq < newmax))
- newmax = tmpfreq;
- }
- } else {
- for(i=0; i<numscales; i++) {
- tmpfreq = clock_ratio[i] * current_fsb * 100;
- if ((tmpfreq > policy->max) &&
- (tmpfreq < newmax))
- newmax = tmpfreq;
- }
- }
-
- policy->max = newmax;
-
- cpufreq_verify_within_limits(policy, lowest_speed, highest_speed);
-
- return 0;
-}
-
-
-static int longhaul_get_best_freq_for_fsb(struct cpufreq_policy *policy,
- unsigned int min_mult,
- unsigned int max_mult,
- unsigned int fsb,
- unsigned int *new_mult)
-{
- unsigned int optimal = 0;
- unsigned int found_optimal = 0;
- unsigned int i;
-
- switch(policy->policy) {
- case CPUFREQ_POLICY_POWERSAVE:
- optimal = max_mult;
- break;
- case CPUFREQ_POLICY_PERFORMANCE:
- optimal = min_mult;
- }
-
- for(i=0; i<numscales; i++) {
- unsigned int freq = clock_ratio[i] * fsb * 100;
- if ((freq > policy->max) ||
- (freq < policy->min))
- continue;
- switch(policy->policy) {
- case CPUFREQ_POLICY_POWERSAVE:
- if (clock_ratio[i] < clock_ratio[optimal]) {
- found_optimal = 1;
- optimal = i;
- }
- break;
- case CPUFREQ_POLICY_PERFORMANCE:
- if (clock_ratio[i] > clock_ratio[optimal]) {
- found_optimal = 1;
- optimal = i;
- }
- break;
- }
- }
-
- if (found_optimal) {
- *new_mult = optimal;
- return 1;
- }
- return 0;
+ return cpufreq_frequency_table_verify(policy, longhaul_table);
}
-static int longhaul_setpolicy (struct cpufreq_policy *policy)
+static int longhaul_target (struct cpufreq_policy *policy,
+ unsigned int target_freq,
+ unsigned int relation)
{
- unsigned int i;
- unsigned int fsb_index = 0;
- unsigned int new_fsb = 0;
- unsigned int new_clock_ratio = 0;
- unsigned int min_mult = 0;
- unsigned int max_mult = 0;
-
+ unsigned int table_index = 0;
+ unsigned int new_fsb = 0;
+ unsigned int new_clock_ratio = 0;
- if (!longhaul_driver)
+ if (cpufreq_frequency_table_target(policy, longhaul_table, target_freq, relation, &table_index))
return -EINVAL;
- if (policy->policy==CPUFREQ_POLICY_PERFORMANCE)
- fsb_search_table = perf_fsb_table;
- else
- fsb_search_table = power_fsb_table;
-
- for(i=0;i<numscales;i++) {
- if (clock_ratio[max_mult] < clock_ratio[i])
- max_mult = i;
- if (clock_ratio[min_mult] > clock_ratio[i])
- min_mult = i;
- }
-
- if (can_scale_fsb==1) {
- unsigned int found = 0;
- for (fsb_index=0; fsb_search_table[fsb_index]!=-1; fsb_index++)
- {
- if (longhaul_get_best_freq_for_fsb(policy,
- min_mult, max_mult,
- fsb_search_table[fsb_index],
- &new_clock_ratio)) {
- new_fsb = fsb_search_table[fsb_index];
- break;
- }
- }
- if (!found)
- return -EINVAL;
- } else {
- new_fsb = current_fsb;
- if (!longhaul_get_best_freq_for_fsb(policy, min_mult,
- max_mult, new_fsb, &new_clock_ratio))
- return -EINVAL;
- }
-
+ new_clock_ratio = longhaul_table[table_index].index & 0xFF;
+ new_fsb = power_fsb_table[(longhaul_table[table_index].index & 0xFF00) >> 8];
+
longhaul_setstate(new_clock_ratio, new_fsb);
return 0;
}
-
-static int __init longhaul_init (void)
+static int longhaul_cpu_init (struct cpufreq_policy *policy)
{
struct cpuinfo_x86 *c = cpu_data;
- unsigned int currentspeed;
- static int currentmult;
- unsigned long lo, hi;
- int ret;
- struct cpufreq_driver *driver;
if ((c->x86_vendor != X86_VENDOR_CENTAUR) || (c->x86 !=6) )
return -ENODEV;
@@ -733,21 +619,12 @@
memcpy (eblcr_table, c5m_eblcr, sizeof(c5m_eblcr));
break;
- default:
- printk (KERN_INFO "longhaul: Unknown VIA CPU. Contact davej@suse.de\n");
- return -ENODEV;
}
printk (KERN_INFO "longhaul: VIA CPU detected. Longhaul version %d supported\n", longhaul);
- current_fsb = longhaul_get_cpu_fsb();
- currentmult = longhaul_get_cpu_mult();
- currentspeed = currentmult * current_fsb * 100;
-
- dprintk (KERN_INFO "longhaul: CPU currently at %dMHz (%d x %d.%d)\n",
- (currentspeed/1000), current_fsb, currentmult/10, currentmult%10);
-
if (longhaul==2 || longhaul==3) {
+ unsigned long lo, hi;
rdmsr (MSR_VIA_LONGHAUL, lo, hi);
if ((lo & (1<<0)) && (dont_scale_voltage==0))
longhaul_setup_voltagescaling (lo, hi);
@@ -756,57 +633,53 @@
can_scale_fsb = 1;
}
- longhaul_get_ranges();
-
- driver = kmalloc(sizeof(struct cpufreq_driver) +
- NR_CPUS * sizeof(struct cpufreq_policy), GFP_KERNEL);
- if (!driver)
+ if (longhaul_get_ranges())
return -ENOMEM;
- memset(driver, 0, sizeof(struct cpufreq_driver) +
- NR_CPUS * sizeof(struct cpufreq_policy));
- driver->policy = (struct cpufreq_policy *) (driver + 1);
+ policy->policy = CPUFREQ_POLICY_PERFORMANCE;
+ policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
-#ifdef CONFIG_CPU_FREQ_24_API
- driver->cpu_cur_freq[0] = currentspeed;
-#endif
+ policy->cur = (unsigned int) (longhaul_get_cpu_fsb() * longhaul_get_cpu_mult() * 100);
+
+ return cpufreq_frequency_table_cpuinfo(policy, longhaul_table);
+}
+
+static struct cpufreq_driver longhaul_driver = {
+ .verify = longhaul_verify,
+ .target = longhaul_target,
+ .init = longhaul_cpu_init,
+ .name = "longhaul",
+};
- driver->verify = &longhaul_verify;
- driver->setpolicy = &longhaul_setpolicy;
+static int __init longhaul_init (void)
+{
+ struct cpuinfo_x86 *c = cpu_data;
- strncpy(driver->name, "longhaul", CPUFREQ_NAME_LEN);
+ if ((c->x86_vendor != X86_VENDOR_CENTAUR) || (c->x86 !=6) )
+ return -ENODEV;
- driver->policy[0].cpu = 0;
- driver->policy[0].min = (unsigned int) lowest_speed;
- driver->policy[0].max = (unsigned int) highest_speed;
- driver->policy[0].policy = CPUFREQ_POLICY_PERFORMANCE;
- driver->policy[0].cpuinfo.min_freq = (unsigned int) lowest_speed;
- driver->policy[0].cpuinfo.max_freq = (unsigned int) highest_speed;
- driver->policy[0].cpuinfo.transition_latency = CPUFREQ_ETERNAL;
-
- longhaul_driver = driver;
-
- ret = cpufreq_register(driver);
- if (ret) {
- longhaul_driver = NULL;
- kfree(driver);
+ switch (c->x86_model) {
+ case 6 ... 7:
+ return cpufreq_register_driver(&longhaul_driver);
+ case 8:
+ return -ENODEV;
+ default:
+ printk (KERN_INFO "longhaul: Unknown VIA CPU. Contact davej@suse.de\n");
}
- return ret;
+ return -ENODEV;
}
-
static void __exit longhaul_exit (void)
{
- if (longhaul_driver) {
- cpufreq_unregister();
- kfree(longhaul_driver);
- }
+ cpufreq_unregister_driver(&longhaul_driver);
+ kfree(longhaul_table);
}
MODULE_PARM (dont_scale_fsb, "i");
MODULE_PARM (dont_scale_voltage, "i");
MODULE_PARM (current_fsb, "i");
+MODULE_PARM (prefer_slow_fsb, "i");
MODULE_AUTHOR ("Dave Jones <davej@suse.de>");
MODULE_DESCRIPTION ("Longhaul driver for VIA Cyrix processors.");
diff -Nru a/arch/i386/kernel/cpu/cpufreq/longrun.c b/arch/i386/kernel/cpu/cpufreq/longrun.c
--- a/arch/i386/kernel/cpu/cpufreq/longrun.c Tue Mar 4 19:30:11 2003
+++ b/arch/i386/kernel/cpu/cpufreq/longrun.c Tue Mar 4 19:30:11 2003
@@ -1,5 +1,5 @@
/*
- * $Id: longrun.c,v 1.22 2003/02/10 17:31:50 db Exp $
+ * $Id: longrun.c,v 1.25 2003/02/28 16:03:50 db Exp $
*
* (C) 2002 - 2003 Dominik Brodowski
*
@@ -133,7 +133,7 @@
* longrun_determine_freqs - determines the lowest and highest possible core frequency
*
* Determines the lowest and highest possible core frequencies on this CPU.
- * This is neccessary to calculate the performance percentage according to
+ * This is necessary to calculate the performance percentage according to
* TMTA rules:
* performance_pctg = (target_freq - low_freq)/(high_freq - low_freq)
*/
@@ -244,10 +244,6 @@
policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
longrun_get_policy(policy);
-#ifdef CONFIG_CPU_FREQ_24_API
- longrun_driver.cpu_cur_freq[policy->cpu] = longrun_low_freq; /* dummy value */
-#endif
-
return 0;
}
diff -Nru a/arch/i386/kernel/cpu/cpufreq/p4-clockmod.c b/arch/i386/kernel/cpu/cpufreq/p4-clockmod.c
--- a/arch/i386/kernel/cpu/cpufreq/p4-clockmod.c Tue Mar 4 19:30:13 2003
+++ b/arch/i386/kernel/cpu/cpufreq/p4-clockmod.c Tue Mar 4 19:30:13 2003
@@ -49,8 +49,6 @@
static int has_N44_O17_errata[NR_CPUS];
static int stock_freq;
-static struct cpufreq_driver p4clockmod_driver;
-
static int cpufreq_p4_setdc(unsigned int cpu, unsigned int newstate)
{
@@ -220,9 +218,7 @@
/* cpuinfo and default policy values */
policy->policy = CPUFREQ_POLICY_PERFORMANCE;
policy->cpuinfo.transition_latency = 1000;
-#ifdef CONFIG_CPU_FREQ_24_API
- p4clockmod_driver.cpu_cur_freq[policy->cpu] = stock_freq;
-#endif
+ policy->cur = stock_freq;
return cpufreq_frequency_table_cpuinfo(policy, &p4clockmod_table[0]);
}
diff -Nru a/arch/i386/kernel/cpu/cpufreq/powernow-k6.c b/arch/i386/kernel/cpu/cpufreq/powernow-k6.c
--- a/arch/i386/kernel/cpu/cpufreq/powernow-k6.c Tue Mar 4 19:30:04 2003
+++ b/arch/i386/kernel/cpu/cpufreq/powernow-k6.c Tue Mar 4 19:30:04 2003
@@ -1,9 +1,9 @@
/*
- * $Id: powernow-k6.c,v 1.36 2002/10/31 21:17:40 db Exp $
+ * $Id: powernow-k6.c,v 1.48 2003/02/22 10:23:46 db Exp $
* This file was part of Powertweak Linux (http://powertweak.sf.net)
* and is shared with the Linux Kernel module.
*
- * (C) 2000-2002 Dave Jones, Arjan van de Ven, Janne Pänkälä, Dominik Brodowski.
+ * (C) 2000-2003 Dave Jones, Arjan van de Ven, Janne Pänkälä, Dominik Brodowski.
*
* Licensed under the terms of the GNU GPL License version 2.
*
@@ -25,7 +25,6 @@
#define POWERNOW_IOPORT 0xfff0 /* it doesn't matter where, as long
as it is unused */
-static struct cpufreq_driver *powernow_driver;
static unsigned int busfreq; /* FSB, in 10 kHz */
static unsigned int max_multiplier;
@@ -77,8 +76,8 @@
unsigned long msrval;
struct cpufreq_freqs freqs;
- if (!powernow_driver) {
- printk(KERN_ERR "cpufreq: initialization problem or invalid target frequency\n");
+ if (clock_ratio[best_i].index > max_multiplier) {
+ printk(KERN_ERR "cpufreq: invalid target frequency\n");
return;
}
@@ -126,11 +125,13 @@
*
* sets a new CPUFreq policy
*/
-static int powernow_k6_setpolicy (struct cpufreq_policy *policy)
+static int powernow_k6_target (struct cpufreq_policy *policy,
+ unsigned int target_freq,
+ unsigned int relation)
{
unsigned int newstate = 0;
- if (cpufreq_frequency_table_setpolicy(policy, &clock_ratio[0], &newstate))
+ if (cpufreq_frequency_table_target(policy, &clock_ratio[0], target_freq, relation, &newstate))
return -EINVAL;
powernow_k6_set_state(newstate);
@@ -139,6 +140,59 @@
}
+static int powernow_k6_cpu_init(struct cpufreq_policy *policy)
+{
+ struct cpuinfo_x86 *c = cpu_data;
+ unsigned int i;
+
+ /* capability check */
+ if ((c->x86_vendor != X86_VENDOR_AMD) || (c->x86 != 5) ||
+ ((c->x86_model != 12) && (c->x86_model != 13)))
+ return -ENODEV;
+ if (policy->cpu != 0)
+ return -ENODEV;
+
+ /* get frequencies */
+ max_multiplier = powernow_k6_get_cpu_multiplier();
+ busfreq = cpu_khz / max_multiplier;
+
+ /* table init */
+ for (i=0; (clock_ratio[i].frequency != CPUFREQ_TABLE_END); i++) {
+ if (clock_ratio[i].index > max_multiplier)
+ clock_ratio[i].frequency = CPUFREQ_ENTRY_INVALID;
+ else
+ clock_ratio[i].frequency = busfreq * clock_ratio[i].index;
+ }
+
+ /* cpuinfo and default policy values */
+ policy->policy = CPUFREQ_POLICY_PERFORMANCE;
+ policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
+ policy->cur = busfreq * max_multiplier;
+
+ return cpufreq_frequency_table_cpuinfo(policy, &clock_ratio[0]);
+}
+
+
+static int powernow_k6_cpu_exit(struct cpufreq_policy *policy)
+{
+ unsigned int i;
+ for (i=0; i<8; i++) {
+ if (clock_ratio[i].index == max_multiplier)
+ powernow_k6_set_state(i);
+ }
+ return 0;
+}
+
+
+static struct cpufreq_driver powernow_k6_driver = {
+ .verify = powernow_k6_verify,
+ .target = powernow_k6_target,
+ .init = powernow_k6_cpu_init,
+ .exit = powernow_k6_cpu_exit,
+ .name = "powernow-k6",
+};
+
+
/**
* powernow_k6_init - initializes the k6 PowerNow! CPUFreq driver
*
@@ -149,68 +203,22 @@
static int __init powernow_k6_init(void)
{
struct cpuinfo_x86 *c = cpu_data;
- struct cpufreq_driver *driver;
- unsigned int result;
- unsigned int i;
if ((c->x86_vendor != X86_VENDOR_AMD) || (c->x86 != 5) ||
((c->x86_model != 12) && (c->x86_model != 13)))
return -ENODEV;
- max_multiplier = powernow_k6_get_cpu_multiplier();
- busfreq = cpu_khz / max_multiplier;
-
if (!request_region(POWERNOW_IOPORT, 16, "PowerNow!")) {
printk("cpufreq: PowerNow IOPORT region already used.\n");
return -EIO;
}
- /* initialization of main "cpufreq" code*/
- driver = kmalloc(sizeof(struct cpufreq_driver) +
- NR_CPUS * sizeof(struct cpufreq_policy), GFP_KERNEL);
- if (!driver) {
+ if (cpufreq_register_driver(&powernow_k6_driver)) {
release_region (POWERNOW_IOPORT, 16);
- return -ENOMEM;
- }
- memset(driver, 0, sizeof(struct cpufreq_driver) +
- NR_CPUS * sizeof(struct cpufreq_policy));
- driver->policy = (struct cpufreq_policy *) (driver + 1);
-
- /* table init */
- for (i=0; (clock_ratio[i].frequency != CPUFREQ_TABLE_END); i++) {
- if (clock_ratio[i].index > max_multiplier)
- clock_ratio[i].frequency = CPUFREQ_ENTRY_INVALID;
- else
- clock_ratio[i].frequency = busfreq * clock_ratio[i].index;
- }
-
- driver->verify = &powernow_k6_verify;
- driver->setpolicy = &powernow_k6_setpolicy;
- strncpy(driver->name, "powernow-k6", CPUFREQ_NAME_LEN);
-
- /* cpuinfo and default policy values */
- driver->policy[0].cpu = 0;
- driver->policy[0].cpuinfo.transition_latency = CPUFREQ_ETERNAL;
- driver->policy[0].policy = CPUFREQ_POLICY_PERFORMANCE;
-#ifdef CONFIG_CPU_FREQ_24_API
- driver->cpu_cur_freq[0] = busfreq * max_multiplier;
-#endif
- result = cpufreq_frequency_table_cpuinfo(&driver->policy[0], &clock_ratio[0]);
- if (result) {
- kfree(driver);
- return result;
- }
-
- powernow_driver = driver;
-
- result = cpufreq_register(driver);
- if (result) {
- release_region (POWERNOW_IOPORT, 16);
- powernow_driver = NULL;
- kfree(driver);
+ return -EINVAL;
}
- return result;
+ return 0;
}
@@ -221,20 +229,14 @@
*/
static void __exit powernow_k6_exit(void)
{
- unsigned int i;
-
- if (powernow_driver) {
- for (i=0;i<8;i++)
- if (clock_ratio[i].index == max_multiplier)
- powernow_k6_set_state(i);
- cpufreq_unregister();
- kfree(powernow_driver);
- }
+ cpufreq_unregister_driver(&powernow_k6_driver);
+ release_region (POWERNOW_IOPORT, 16);
}
MODULE_AUTHOR ("Arjan van de Ven , Dave Jones , Dominik Brodowski ");
MODULE_DESCRIPTION ("PowerNow! driver for AMD K6-2+ / K6-3+ processors.");
MODULE_LICENSE ("GPL");
+
module_init(powernow_k6_init);
module_exit(powernow_k6_exit);
diff -Nru a/arch/i386/kernel/cpu/cpufreq/powernow-k7.c b/arch/i386/kernel/cpu/cpufreq/powernow-k7.c
--- a/arch/i386/kernel/cpu/cpufreq/powernow-k7.c Tue Mar 4 19:30:04 2003
+++ b/arch/i386/kernel/cpu/cpufreq/powernow-k7.c Tue Mar 4 19:30:04 2003
@@ -1,5 +1,5 @@
/*
- * $Id: powernow-k7.c,v 1.31 2003/02/12 21:16:35 davej Exp $
+ * $Id: powernow-k7.c,v 1.34 2003/02/22 10:23:46 db Exp $
*
* (C) 2003 Dave Jones
*
@@ -72,8 +72,6 @@
150, 225, 160, 165, 170, 180, -1, -1,
};
-static struct cpufreq_driver powernow_driver;
-
static struct cpufreq_frequency_table *powernow_table;
static unsigned int can_scale_bus;
@@ -369,13 +367,18 @@
policy->policy = CPUFREQ_POLICY_PERFORMANCE;
policy->cpuinfo.transition_latency = latency;
-#ifdef CONFIG_CPU_FREQ_24_API
- powernow_driver.cpu_cur_freq[policy->cpu] = maximum_speed;
-#endif
+ policy->cur = maximum_speed;
return cpufreq_frequency_table_cpuinfo(policy, powernow_table);
}
+static struct cpufreq_driver powernow_driver = {
+ .verify = powernow_verify,
+ .target = powernow_target,
+ .init = powernow_cpu_init,
+ .name = "powernow-k7",
+};
+
static int __init powernow_init (void)
{
if (check_powernow()==0)
@@ -390,14 +393,6 @@
if (powernow_table)
kfree(powernow_table);
}
-
-static struct cpufreq_driver powernow_driver = {
- .verify = powernow_verify,
- .target = powernow_target,
- .init = powernow_cpu_init,
- .name = "powernow-k7",
-};
-
MODULE_AUTHOR ("Dave Jones ");
MODULE_DESCRIPTION ("Powernow driver for AMD K7 processors.");
diff -Nru a/arch/i386/kernel/cpu/cpufreq/speedstep.c b/arch/i386/kernel/cpu/cpufreq/speedstep.c
--- a/arch/i386/kernel/cpu/cpufreq/speedstep.c Tue Mar 4 19:30:03 2003
+++ b/arch/i386/kernel/cpu/cpufreq/speedstep.c Tue Mar 4 19:30:03 2003
@@ -1,5 +1,5 @@
/*
- * $Id: speedstep.c,v 1.68 2003/01/20 17:31:47 db Exp $
+ * $Id: speedstep.c,v 1.70 2003/02/22 10:23:46 db Exp $
*
* (C) 2001 Dave Jones, Arjan van de ven.
* (C) 2002 - 2003 Dominik Brodowski
@@ -30,8 +30,6 @@
#include <asm/msr.h>
-static struct cpufreq_driver speedstep_driver;
-
/* speedstep_chipset:
* It is necessary to know which chipset is used. As accesses to
* this device occur at various places in this module, we need a
@@ -629,9 +627,7 @@
policy->policy = (speed == speedstep_low_freq) ?
CPUFREQ_POLICY_POWERSAVE : CPUFREQ_POLICY_PERFORMANCE;
policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
-#ifdef CONFIG_CPU_FREQ_24_API
- speedstep_driver.cpu_cur_freq[policy->cpu] = speed;
-#endif
+ policy->cur = speed;
return cpufreq_frequency_table_cpuinfo(policy, &speedstep_freqs[0]);
}
@@ -686,7 +682,7 @@
return -ENODEV;
}
- dprintk(KERN_INFO "cpufreq: Intel(R) SpeedStep(TM) support $Revision: 1.68 $\n");
+ dprintk(KERN_INFO "cpufreq: Intel(R) SpeedStep(TM) support $Revision: 1.70 $\n");
/* activate speedstep support */
if (speedstep_activate())
diff -Nru a/arch/i386/kernel/cpu/cyrix.c b/arch/i386/kernel/cpu/cyrix.c
--- a/arch/i386/kernel/cpu/cyrix.c Tue Mar 4 19:30:05 2003
+++ b/arch/i386/kernel/cpu/cyrix.c Tue Mar 4 19:30:05 2003
@@ -49,7 +49,7 @@
* Cx86_dir0_msb is a HACK needed by check_cx686_cpuid/slop in bugs.h in
* order to identify the Cyrix CPU model after we're out of setup.c
*
- * Actually since bugs.h doesnt even reference this perhaps someone should
+ * Actually since bugs.h doesn't even reference this perhaps someone should
* fix the documentation ???
*/
static unsigned char Cx86_dir0_msb __initdata = 0;
@@ -77,7 +77,7 @@
* BIOSes for compatibility with DOS games. This makes the udelay loop
* work correctly, and improves performance.
*
- * FIXME: our newer udelay uses the tsc. We dont need to frob with SLOP
+ * FIXME: our newer udelay uses the tsc. We don't need to frob with SLOP
*/
extern void calibrate_delay(void) __init;
diff -Nru a/arch/i386/kernel/cpu/intel.c b/arch/i386/kernel/cpu/intel.c
--- a/arch/i386/kernel/cpu/intel.c Tue Mar 4 19:30:05 2003
+++ b/arch/i386/kernel/cpu/intel.c Tue Mar 4 19:30:05 2003
@@ -151,7 +151,7 @@
#ifdef CONFIG_X86_F00F_BUG
/*
* All current models of Pentium and Pentium with MMX technology CPUs
- * have the F0 0F bug, which lets nonpriviledged users lock up the system.
+ * have the F0 0F bug, which lets nonprivileged users lock up the system.
* Note that the workaround only should be initialized once...
*/
c->f00f_bug = 0;
diff -Nru a/arch/i386/kernel/cpu/mcheck/non-fatal.c b/arch/i386/kernel/cpu/mcheck/non-fatal.c
--- a/arch/i386/kernel/cpu/mcheck/non-fatal.c Tue Mar 4 19:30:04 2003
+++ b/arch/i386/kernel/cpu/mcheck/non-fatal.c Tue Mar 4 19:30:04 2003
@@ -33,7 +33,7 @@
rdmsr (MSR_IA32_MC0_STATUS+i*4, low, high);
if (high & (1<<31)) {
- printk (KERN_EMERG "MCE: The hardware reports a non fatal, correctable incident occured on CPU %d.\n",
+ printk (KERN_EMERG "MCE: The hardware reports a non fatal, correctable incident occurred on CPU %d.\n",
smp_processor_id());
printk (KERN_EMERG "Bank %d: %08x%08x\n", i, high, low);
diff -Nru a/arch/i386/kernel/dmi_scan.c b/arch/i386/kernel/dmi_scan.c
--- a/arch/i386/kernel/dmi_scan.c Tue Mar 4 19:30:07 2003
+++ b/arch/i386/kernel/dmi_scan.c Tue Mar 4 19:30:07 2003
@@ -440,7 +440,7 @@
{
printk(KERN_INFO " *** Possibly defective BIOS detected (irqtable)\n");
printk(KERN_INFO " *** Many BIOSes matching this signature have incorrect IRQ routing tables.\n");
- printk(KERN_INFO " *** If you see IRQ problems, in paticular SCSI resets and hangs at boot\n");
+ printk(KERN_INFO " *** If you see IRQ problems, in particular SCSI resets and hangs at boot\n");
printk(KERN_INFO " *** contact your hardware vendor and ask about updates.\n");
printk(KERN_INFO " *** Building an SMP kernel may evade the bug some of the time.\n");
#ifdef CONFIG_X86_IO_APIC
@@ -455,7 +455,7 @@
static __init int broken_toshiba_keyboard(struct dmi_blacklist *d)
{
- printk(KERN_WARNING "Toshiba with broken keyboard detected. If your keyboard sometimes generates 3 keypresses instead of one, contact pavel@ucw.cz\n");
+ printk(KERN_WARNING "Toshiba with broken keyboard detected. If your keyboard sometimes generates 3 keypresses instead of one, see http://davyd.ucc.asn.au/projects/toshiba/README\n");
return 0;
}
@@ -470,6 +470,23 @@
return 0;
}
+#ifdef CONFIG_ACPI_SLEEP
+static __init int reset_videomode_after_s3(struct dmi_blacklist *d)
+{
+ /* See acpi_wakeup.S */
+ extern long acpi_video_flags;
+ acpi_video_flags |= 2;
+ return 0;
+}
+
+static __init int reset_videobios_after_s3(struct dmi_blacklist *d)
+{
+ extern long acpi_video_flags;
+ acpi_video_flags |= 1;
+ return 0;
+}
+#endif
+
/*
* Some Bioses enable the PS/2 mouse (touchpad) at resume, even if it was
* disabled before the suspend. Linux used to get terribly confused by that.
@@ -743,6 +760,12 @@
MATCH(DMI_PRODUCT_NAME, "S4030CDT/4.3"),
NO_MATCH, NO_MATCH, NO_MATCH
} },
+#ifdef CONFIG_ACPI_SLEEP
+ { reset_videomode_after_s3, "Toshiba Satellite 4030cdt", { /* Reset video mode after returning from ACPI S3 sleep */
+ MATCH(DMI_PRODUCT_NAME, "S4030CDT/4.3"),
+ NO_MATCH, NO_MATCH, NO_MATCH
+ } },
+#endif
{ print_if_true, KERN_WARNING "IBM T23 - BIOS 1.03b+ and controller firmware 1.02+ may be needed for Linux APM.", {
MATCH(DMI_SYS_VENDOR, "IBM"),
diff -Nru a/arch/i386/kernel/entry.S b/arch/i386/kernel/entry.S
--- a/arch/i386/kernel/entry.S Tue Mar 4 19:30:05 2003
+++ b/arch/i386/kernel/entry.S Tue Mar 4 19:30:05 2003
@@ -228,7 +228,6 @@
#define SYSENTER_RETURN 0xffffe010
# sysenter call handler stub
- ALIGN
ENTRY(sysenter_entry)
sti
pushl $(__USER_DS)
@@ -271,7 +270,6 @@
# system call handler stub
- ALIGN
ENTRY(system_call)
pushl %eax # save orig_eax
SAVE_ALL
diff -Nru a/arch/i386/kernel/i386_ksyms.c b/arch/i386/kernel/i386_ksyms.c
--- a/arch/i386/kernel/i386_ksyms.c Tue Mar 4 19:30:14 2003
+++ b/arch/i386/kernel/i386_ksyms.c Tue Mar 4 19:30:14 2003
@@ -68,7 +68,6 @@
EXPORT_SYMBOL(MCA_bus);
#ifdef CONFIG_DISCONTIGMEM
EXPORT_SYMBOL(node_data);
-EXPORT_SYMBOL(pfn_to_nid);
#endif
#ifdef CONFIG_X86_NUMAQ
EXPORT_SYMBOL(xquad_portio);
diff -Nru a/arch/i386/kernel/io_apic.c b/arch/i386/kernel/io_apic.c
--- a/arch/i386/kernel/io_apic.c Tue Mar 4 19:30:05 2003
+++ b/arch/i386/kernel/io_apic.c Tue Mar 4 19:30:05 2003
@@ -46,7 +46,7 @@
/*
* Is the SiS APIC rmw bug present ?
- * -1 = dont know, 0 = no, 1 = yes
+ * -1 = don't know, 0 = no, 1 = yes
*/
int sis_apic_bug = -1;
@@ -223,7 +223,7 @@
extern unsigned long irq_affinity [NR_IRQS];
int __cacheline_aligned pending_irq_balance_apicid [NR_IRQS];
-static int irqbalance_disabled __initdata = 0;
+static int irqbalance_disabled = NO_BALANCE_IRQ;
static int physical_balance = 0;
struct irq_cpu_info {
@@ -492,7 +492,7 @@
unsigned long allowed_mask;
unsigned int new_cpu;
- if (no_balance_irq)
+ if (irqbalance_disabled)
return;
allowed_mask = cpu_online_map & irq_affinity[irq];
@@ -1376,8 +1376,7 @@
void print_all_local_APICs (void)
{
- smp_call_function(print_local_APIC, NULL, 1, 1);
- print_local_APIC(NULL);
+ on_each_cpu(print_local_APIC, NULL, 1, 1);
}
void /*__init*/ print_PIC(void)
@@ -1843,8 +1842,7 @@
*/
printk(KERN_INFO "activating NMI Watchdog ...");
- smp_call_function(enable_NMI_through_LVT0, NULL, 1, 1);
- enable_NMI_through_LVT0(NULL);
+ on_each_cpu(enable_NMI_through_LVT0, NULL, 1, 1);
printk(" done.\n");
}
diff -Nru a/arch/i386/kernel/irq.c b/arch/i386/kernel/irq.c
--- a/arch/i386/kernel/irq.c Tue Mar 4 19:30:04 2003
+++ b/arch/i386/kernel/irq.c Tue Mar 4 19:30:04 2003
@@ -87,7 +87,7 @@
{
/*
* 'what should we do if we get a hw irq event on an illegal vector'.
- * each architecture has to answer this themselves, it doesnt deserve
+ * each architecture has to answer this themselves, it doesn't deserve
* a generic callback i think.
*/
#if CONFIG_X86
diff -Nru a/arch/i386/kernel/ldt.c b/arch/i386/kernel/ldt.c
--- a/arch/i386/kernel/ldt.c Tue Mar 4 19:30:11 2003
+++ b/arch/i386/kernel/ldt.c Tue Mar 4 19:30:11 2003
@@ -55,12 +55,14 @@
wmb();
if (reload) {
- load_LDT(pc);
#ifdef CONFIG_SMP
preempt_disable();
+ load_LDT(pc);
if (current->mm->cpu_vm_mask != (1 << smp_processor_id()))
smp_call_function(flush_ldt, 0, 1, 1);
preempt_enable();
+#else
+ load_LDT(pc);
#endif
}
if (oldsize) {
diff -Nru a/arch/i386/kernel/microcode.c b/arch/i386/kernel/microcode.c
--- a/arch/i386/kernel/microcode.c Tue Mar 4 19:30:05 2003
+++ b/arch/i386/kernel/microcode.c Tue Mar 4 19:30:05 2003
@@ -183,11 +183,10 @@
int i, error = 0, err;
struct microcode *m;
- if (smp_call_function(do_update_one, NULL, 1, 1) != 0) {
+ if (on_each_cpu(do_update_one, NULL, 1, 1) != 0) {
printk(KERN_ERR "microcode: IPI timeout, giving up\n");
return -EIO;
}
- do_update_one(NULL);
 	for (i=0; i
diff -Nru a/arch/i386/kernel/mpparse.c b/arch/i386/kernel/mpparse.c
 	if (!(m->mpc_cpuflag & CPU_ENABLED))
 		return;
return;
- apicid = mpc_apic_id(m, translation_table[mpc_record]->trans_quad);
+ apicid = mpc_apic_id(m, translation_table[mpc_record]);
if (m->mpc_featureflag&(1<<0))
Dprintk(" Floating point unit present.\n");
@@ -631,7 +631,7 @@
else if (acpi_lapic)
printk(KERN_INFO "Using ACPI for processor (LAPIC) configuration information\n");
- printk("KERN_INFO Intel MultiProcessor Specification v1.%d\n", mpf->mpf_specification);
+ printk(KERN_INFO "Intel MultiProcessor Specification v1.%d\n", mpf->mpf_specification);
if (mpf->mpf_feature2 & (1<<7)) {
printk(KERN_INFO " IMCR and PIC compatibility mode.\n");
pic_mode = 1;
diff -Nru a/arch/i386/kernel/nmi.c b/arch/i386/kernel/nmi.c
--- a/arch/i386/kernel/nmi.c Tue Mar 4 19:30:09 2003
+++ b/arch/i386/kernel/nmi.c Tue Mar 4 19:30:09 2003
@@ -325,7 +325,7 @@
* as these watchdog NMI IRQs are generated on every CPU, we only
* have to check the current processor.
*
- * since NMIs dont listen to _any_ locks, we have to be extremely
+ * since NMIs don't listen to _any_ locks, we have to be extremely
* careful not to rely on unsafe variables. The printk might lock
* up though, so we have to break up any console locks first ...
* [when there will be more tty-related locks, break them up
diff -Nru a/arch/i386/kernel/numaq.c b/arch/i386/kernel/numaq.c
--- a/arch/i386/kernel/numaq.c Tue Mar 4 19:30:11 2003
+++ b/arch/i386/kernel/numaq.c Tue Mar 4 19:30:11 2003
@@ -27,6 +27,7 @@
#include
#include
#include
+#include <linux/module.h>
#include
/* These are needed before the pgdat's are created */
@@ -82,19 +83,7 @@
* physnode_map[8- ] = -1;
*/
int physnode_map[MAX_ELEMENTS] = { [0 ... (MAX_ELEMENTS - 1)] = -1};
-
-#define PFN_TO_ELEMENT(pfn) (pfn / PAGES_PER_ELEMENT)
-#define PA_TO_ELEMENT(pa) (PFN_TO_ELEMENT(pa >> PAGE_SHIFT))
-
-int pfn_to_nid(unsigned long pfn)
-{
- int nid = physnode_map[PFN_TO_ELEMENT(pfn)];
-
- if (nid == -1)
- BUG(); /* address is not present */
-
- return nid;
-}
+EXPORT_SYMBOL(physnode_map);
/*
* for each node mark the regions
diff -Nru a/arch/i386/kernel/setup.c b/arch/i386/kernel/setup.c
--- a/arch/i386/kernel/setup.c Tue Mar 4 19:30:08 2003
+++ b/arch/i386/kernel/setup.c Tue Mar 4 19:30:08 2003
@@ -14,6 +14,9 @@
* Moved CPU detection code to cpu/${cpu}.c
* Patrick Mochel , March 2002
*
+ * Provisions for empty E820 memory regions (reported by certain BIOSes).
+ * Alex Achenbach , December 2002.
+ *
*/
/*
@@ -279,7 +282,7 @@
int chgidx, still_changing;
int overlap_entries;
int new_bios_entry;
- int old_nr, new_nr;
+ int old_nr, new_nr, chg_nr;
int i;
/*
@@ -333,20 +336,24 @@
for (i=0; i < 2*old_nr; i++)
change_point[i] = &change_point_list[i];
- /* record all known change-points (starting and ending addresses) */
+ /* record all known change-points (starting and ending addresses),
+ omitting those that are for empty memory regions */
chgidx = 0;
for (i=0; i < old_nr; i++) {
- change_point[chgidx]->addr = biosmap[i].addr;
- change_point[chgidx++]->pbios = &biosmap[i];
- change_point[chgidx]->addr = biosmap[i].addr + biosmap[i].size;
- change_point[chgidx++]->pbios = &biosmap[i];
+ if (biosmap[i].size != 0) {
+ change_point[chgidx]->addr = biosmap[i].addr;
+ change_point[chgidx++]->pbios = &biosmap[i];
+ change_point[chgidx]->addr = biosmap[i].addr + biosmap[i].size;
+ change_point[chgidx++]->pbios = &biosmap[i];
+ }
}
+ chg_nr = chgidx; /* true number of change-points */
/* sort change-point list by memory addresses (low -> high) */
still_changing = 1;
while (still_changing) {
still_changing = 0;
- for (i=1; i < 2*old_nr; i++) {
+ for (i=1; i < chg_nr; i++) {
/* if > , swap */
/* or, if current= & last=, swap */
if ((change_point[i]->addr < change_point[i-1]->addr) ||
@@ -369,7 +376,7 @@
last_type = 0; /* start with undefined memory type */
last_addr = 0; /* start with 0 as last starting address */
/* loop through change-points, determining affect on the new bios map */
- for (chgidx=0; chgidx < 2*old_nr; chgidx++)
+ for (chgidx=0; chgidx < chg_nr; chgidx++)
{
/* keep track of all overlapping bios entries */
if (change_point[chgidx]->addr == change_point[chgidx]->pbios->addr)
@@ -545,6 +552,12 @@
if (*from == '@') {
start_at = memparse(from+1, &from);
add_memory_region(start_at, mem_size, E820_RAM);
+ } else if (*from == '#') {
+ start_at = memparse(from+1, &from);
+ add_memory_region(start_at, mem_size, E820_ACPI);
+ } else if (*from == '$') {
+ start_at = memparse(from+1, &from);
+ add_memory_region(start_at, mem_size, E820_RESERVED);
} else {
limit_regions(mem_size);
userdef=1;
@@ -818,7 +831,7 @@
request_resource(&iomem_resource, res);
if (e820.map[i].type == E820_RAM) {
/*
- * We dont't know which RAM region contains kernel data,
+ * We don't know which RAM region contains kernel data,
* so we try it repeatedly and let the resource manager
* test it.
*/
diff -Nru a/arch/i386/kernel/smp.c b/arch/i386/kernel/smp.c
--- a/arch/i386/kernel/smp.c Tue Mar 4 19:30:04 2003
+++ b/arch/i386/kernel/smp.c Tue Mar 4 19:30:04 2003
@@ -436,7 +436,7 @@
preempt_enable();
}
-static inline void do_flush_tlb_all_local(void)
+static void do_flush_tlb_all(void* info)
{
unsigned long cpu = smp_processor_id();
@@ -445,18 +445,9 @@
leave_mm(cpu);
}
-static void flush_tlb_all_ipi(void* info)
-{
- do_flush_tlb_all_local();
-}
-
void flush_tlb_all(void)
{
- preempt_disable();
- smp_call_function (flush_tlb_all_ipi,0,1,1);
-
- do_flush_tlb_all_local();
- preempt_enable();
+ on_each_cpu(do_flush_tlb_all, 0, 1, 1);
}
/*
diff -Nru a/arch/i386/kernel/smpboot.c b/arch/i386/kernel/smpboot.c
--- a/arch/i386/kernel/smpboot.c Tue Mar 4 19:30:08 2003
+++ b/arch/i386/kernel/smpboot.c Tue Mar 4 19:30:08 2003
@@ -170,7 +170,7 @@
/*
* TSC synchronization.
*
- * We first check wether all CPUs have their TSC's synchronized,
+ * We first check whether all CPUs have their TSC's synchronized,
* then we print a warning if not, and always resync.
*/
@@ -956,7 +956,7 @@
smp_tune_scheduling();
/*
- * If we couldnt find an SMP configuration at boot time,
+ * If we couldn't find an SMP configuration at boot time,
* get out of here now!
*/
if (!smp_found_config) {
diff -Nru a/arch/i386/kernel/srat.c b/arch/i386/kernel/srat.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/arch/i386/kernel/srat.c Tue Mar 4 19:30:14 2003
@@ -0,0 +1,448 @@
+/*
+ * Some of the code in this file has been gleaned from the 64 bit
+ * discontigmem support code base.
+ *
+ * Copyright (C) 2002, IBM Corp.
+ *
+ * All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ * NON INFRINGEMENT. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ * Send feedback to Pat Gaughen
+ */
+#include
+#include
+#include
+#include
+#include
+#include
+
+/*
+ * proximity macros and definitions
+ */
+#define NODE_ARRAY_INDEX(x) ((x) / 8) /* 8 bits/char */
+#define NODE_ARRAY_OFFSET(x) ((x) % 8) /* 8 bits/char */
+#define BMAP_SET(bmap, bit) ((bmap)[NODE_ARRAY_INDEX(bit)] |= 1 << NODE_ARRAY_OFFSET(bit))
+#define BMAP_TEST(bmap, bit) ((bmap)[NODE_ARRAY_INDEX(bit)] & (1 << NODE_ARRAY_OFFSET(bit)))
+#define MAX_PXM_DOMAINS 256 /* 1 byte and no promises about values */
+/* bitmap length; _PXM is at most 255 */
+#define PXM_BITMAP_LEN (MAX_PXM_DOMAINS / 8)
+static u8 pxm_bitmap[PXM_BITMAP_LEN]; /* bitmap of proximity domains */
+
+#define MAX_CHUNKS_PER_NODE 4
+#define MAXCHUNKS (MAX_CHUNKS_PER_NODE * MAX_NUMNODES)
+struct node_memory_chunk_s {
+ unsigned long start_pfn;
+ unsigned long end_pfn;
+ u8 pxm; // proximity domain of node
+ u8 nid; // which cnode contains this chunk?
+ u8 bank; // which mem bank on this node
+};
+static struct node_memory_chunk_s node_memory_chunk[MAXCHUNKS];
+
+static int num_memory_chunks; /* total number of memory chunks */
+static int zholes_size_init;
+static unsigned long zholes_size[MAX_NUMNODES * MAX_NR_ZONES];
+
+unsigned long node_start_pfn[MAX_NUMNODES];
+unsigned long node_end_pfn[MAX_NUMNODES];
+
+extern void * boot_ioremap(unsigned long, unsigned long);
+
+/* Identify CPU proximity domains */
+static void __init parse_cpu_affinity_structure(char *p)
+{
+ struct acpi_table_processor_affinity *cpu_affinity =
+ (struct acpi_table_processor_affinity *) p;
+
+ if (!cpu_affinity->flags.enabled)
+ return; /* empty entry */
+
+ /* mark this node as "seen" in node bitmap */
+ BMAP_SET(pxm_bitmap, cpu_affinity->proximity_domain);
+
+ printk("CPU 0x%02X in proximity domain 0x%02X\n",
+ cpu_affinity->apic_id, cpu_affinity->proximity_domain);
+}
+
+/*
+ * Identify memory proximity domains and hot-remove capabilities.
+ * Fill node memory chunk list structure.
+ */
+static void __init parse_memory_affinity_structure (char *sratp)
+{
+ unsigned long long paddr, size;
+ unsigned long start_pfn, end_pfn;
+ u8 pxm;
+ struct node_memory_chunk_s *p, *q, *pend;
+ struct acpi_table_memory_affinity *memory_affinity =
+ (struct acpi_table_memory_affinity *) sratp;
+
+ if (!memory_affinity->flags.enabled)
+ return; /* empty entry */
+
+ /* mark this node as "seen" in node bitmap */
+ BMAP_SET(pxm_bitmap, memory_affinity->proximity_domain);
+
+ /* calculate info for memory chunk structure */
+ paddr = memory_affinity->base_addr_hi;
+ paddr = (paddr << 32) | memory_affinity->base_addr_lo;
+ size = memory_affinity->length_hi;
+ size = (size << 32) | memory_affinity->length_lo;
+
+ start_pfn = paddr >> PAGE_SHIFT;
+ end_pfn = (paddr + size) >> PAGE_SHIFT;
+
+ pxm = memory_affinity->proximity_domain;
+
+ if (num_memory_chunks >= MAXCHUNKS) {
+ printk("Too many mem chunks in SRAT. Ignoring %lld MBytes at %llx\n",
+ size/(1024*1024), paddr);
+ return;
+ }
+
+ /* Insertion sort based on base address */
+ pend = &node_memory_chunk[num_memory_chunks];
+ for (p = &node_memory_chunk[0]; p < pend; p++) {
+ if (start_pfn < p->start_pfn)
+ break;
+ }
+ if (p < pend) {
+ for (q = pend - 1; q >= p; q--)
+ *(q + 1) = *q;
+ }
+ p->start_pfn = start_pfn;
+ p->end_pfn = end_pfn;
+ p->pxm = pxm;
+
+ num_memory_chunks++;
+
+ printk("Memory range 0x%lX to 0x%lX (type 0x%X) in proximity domain 0x%02X %s\n",
+ start_pfn, end_pfn,
+ memory_affinity->memory_type,
+ memory_affinity->proximity_domain,
+ (memory_affinity->flags.hot_pluggable ?
+ "enabled and removable" : "enabled" ) );
+}
+
+#if MAX_NR_ZONES != 3
+#error "MAX_NR_ZONES != 3, chunk_to_zone requires review"
+#endif
+/* Take a chunk of pages from page frame cstart to cend and count the number
+ * of pages in each zone, returned via zones[].
+ */
+static __init void chunk_to_zones(unsigned long cstart, unsigned long cend,
+ unsigned long *zones)
+{
+ unsigned long max_dma;
+ extern unsigned long max_low_pfn;
+
+ int z;
+ unsigned long rend;
+
+ /* FIXME: MAX_DMA_ADDRESS and max_low_pfn are trying to provide
+ * similarly scoped information and should be handled in a consistent
+ * manner.
+ */
+ max_dma = virt_to_phys((char *)MAX_DMA_ADDRESS) >> PAGE_SHIFT;
+
+ /* Split the hole into the zones in which it falls. Repeatedly
+ * take the segment in which the remaining hole starts, round it
+ * to the end of that zone.
+ */
+ memset(zones, 0, MAX_NR_ZONES * sizeof(long));
+ while (cstart < cend) {
+ if (cstart < max_dma) {
+ z = ZONE_DMA;
+ rend = (cend < max_dma)? cend : max_dma;
+
+ } else if (cstart < max_low_pfn) {
+ z = ZONE_NORMAL;
+ rend = (cend < max_low_pfn)? cend : max_low_pfn;
+
+ } else {
+ z = ZONE_HIGHMEM;
+ rend = cend;
+ }
+ zones[z] += rend - cstart;
+ cstart = rend;
+ }
+}
+
+/*
+ * pfnnode_map keeps track of the physical memory layout of the
+ * nodes on a 256Mb break (each element of the array will
+ * represent 256Mb of memory and will be marked by the node id. so,
+ * if the first gig is on node 0, and the second gig is on node 1
+ * pfnnode_map will contain:
+ * pfnnode_map[0-3] = 0;
+ * pfnnode_map[4-7] = 1;
+ * pfnnode_map[8- ] = -1;
+ */
+int pfnnode_map[MAX_ELEMENTS] = { [0 ... (MAX_ELEMENTS - 1)] = -1};
+EXPORT_SYMBOL(pfnnode_map);
+
+static void __init initialize_pfnnode_map(void)
+{
+ unsigned long topofchunk, cur = 0;
+ int i;
+
+ for (i = 0; i < num_memory_chunks; i++) {
+ cur = node_memory_chunk[i].start_pfn;
+ topofchunk = node_memory_chunk[i].end_pfn;
+ while (cur < topofchunk) {
+ pfnnode_map[PFN_TO_ELEMENT(cur)] = node_memory_chunk[i].nid;
+ cur ++;
+ }
+ }
+}
+
+/* Parse the ACPI Static Resource Affinity Table */
+static int __init acpi20_parse_srat(struct acpi_table_srat *sratp)
+{
+ u8 *start, *end, *p;
+ int i, j, nid;
+ u8 pxm_to_nid_map[MAX_PXM_DOMAINS];/* _PXM to logical node ID map */
+ u8 nid_to_pxm_map[MAX_NUMNODES];/* logical node ID to _PXM map */
+
+ start = (u8 *)(&(sratp->reserved) + 1); /* skip header */
+ p = start;
+ end = (u8 *)sratp + sratp->header.length;
+
+ memset(pxm_bitmap, 0, sizeof(pxm_bitmap)); /* init proximity domain bitmap */
+ memset(node_memory_chunk, 0, sizeof(node_memory_chunk));
+ memset(zholes_size, 0, sizeof(zholes_size));
+
+ /* -1 in these maps means not available */
+ memset(pxm_to_nid_map, -1, sizeof(pxm_to_nid_map));
+ memset(nid_to_pxm_map, -1, sizeof(nid_to_pxm_map));
+
+ num_memory_chunks = 0;
+ while (p < end) {
+ switch (*p) {
+ case ACPI_SRAT_PROCESSOR_AFFINITY:
+ parse_cpu_affinity_structure(p);
+ break;
+ case ACPI_SRAT_MEMORY_AFFINITY:
+ parse_memory_affinity_structure(p);
+ break;
+ default:
+ printk("ACPI 2.0 SRAT: unknown entry skipped: type=0x%02X, len=%d\n", p[0], p[1]);
+ break;
+ }
+ p += p[1];
+ if (p[1] == 0) {
+ printk("acpi20_parse_srat: Entry length value is zero;"
+ " can't parse any further!\n");
+ break;
+ }
+ }
+
+ /* Calculate total number of nodes in system from PXM bitmap and create
+ * a set of sequential node IDs starting at zero. (ACPI doesn't seem
+ * to specify the range of _PXM values.)
+ */
+ numnodes = 0; /* init total nodes in system */
+ for (i = 0; i < MAX_PXM_DOMAINS; i++) {
+ if (BMAP_TEST(pxm_bitmap, i)) {
+ pxm_to_nid_map[i] = numnodes;
+ nid_to_pxm_map[numnodes] = i;
+ node_set_online(numnodes);
+ ++numnodes;
+ }
+ }
+
+ if (numnodes == 0)
+ BUG();
+
+ /* set cnode id in memory chunk structure */
+ for (i = 0; i < num_memory_chunks; i++)
+ node_memory_chunk[i].nid = pxm_to_nid_map[node_memory_chunk[i].pxm];
+
+ initialize_pfnnode_map();
+
+ printk("pxm bitmap: ");
+ for (i = 0; i < sizeof(pxm_bitmap); i++) {
+ printk("%02X ", pxm_bitmap[i]);
+ }
+ printk("\n");
+ printk("Number of logical nodes in system = %d\n", numnodes);
+ printk("Number of memory chunks in system = %d\n", num_memory_chunks);
+
+ for (j = 0; j < num_memory_chunks; j++){
+ printk("chunk %d nid %d start_pfn %08lx end_pfn %08lx\n",
+ j, node_memory_chunk[j].nid,
+ node_memory_chunk[j].start_pfn,
+ node_memory_chunk[j].end_pfn);
+ }
+
+ /* calculate node_start_pfn/node_end_pfn arrays */
+ for (nid = 0; nid < numnodes; nid++) {
+ int been_here_before = 0;
+
+ for (j = 0; j < num_memory_chunks; j++){
+ if (node_memory_chunk[j].nid == nid) {
+ if (been_here_before == 0) {
+ node_start_pfn[nid] = node_memory_chunk[j].start_pfn;
+ node_end_pfn[nid] = node_memory_chunk[j].end_pfn;
+ been_here_before = 1;
+ } else { /* We've found another chunk of memory for the node */
+ if (node_start_pfn[nid] < node_memory_chunk[j].start_pfn) {
+ node_end_pfn[nid] = node_memory_chunk[j].end_pfn;
+ }
+ }
+ }
+ }
+ }
+ return 0;
+}
+
+void __init get_memcfg_from_srat(void)
+{
+ struct acpi_table_header *header = NULL;
+ struct acpi_table_rsdp *rsdp = NULL;
+ struct acpi_table_rsdt *rsdt = NULL;
+ struct acpi_pointer rsdp_address;
+ struct acpi_table_rsdt saved_rsdt;
+ int tables = 0;
+ int i = 0;
+
+ acpi_find_root_pointer(ACPI_PHYSICAL_ADDRESSING, &rsdp_address);
+
+ if (rsdp_address.pointer_type == ACPI_PHYSICAL_POINTER) {
+ printk("%s: assigning address to rsdp\n", __FUNCTION__);
+ rsdp = (struct acpi_table_rsdp *)rsdp_address.pointer.physical;
+ } else {
+ printk("%s: rsdp_address is not a physical pointer\n", __FUNCTION__);
+ return;
+ }
+ if (!rsdp) {
+ printk("%s: Didn't find ACPI root!\n", __FUNCTION__);
+ return;
+ }
+
+ printk(KERN_INFO "%.8s v%d [%.6s]\n", rsdp->signature, rsdp->revision,
+ rsdp->oem_id);
+
+ if (strncmp(rsdp->signature, RSDP_SIG,strlen(RSDP_SIG))) {
+ printk(KERN_WARNING "%s: RSDP table signature incorrect\n", __FUNCTION__);
+ return;
+ }
+
+ rsdt = (struct acpi_table_rsdt *)
+ boot_ioremap(rsdp->rsdt_address, sizeof(struct acpi_table_rsdt));
+
+ if (!rsdt) {
+ printk(KERN_WARNING
+ "%s: ACPI: Invalid root system description tables (RSDT)\n",
+ __FUNCTION__);
+ return;
+ }
+
+ header = & rsdt->header;
+
+ if (strncmp(header->signature, RSDT_SIG, strlen(RSDT_SIG))) {
+ printk(KERN_WARNING "ACPI: RSDT signature incorrect\n");
+ return;
+ }
+
+ /*
+ * The number of tables is computed by taking the
+ * size of all entries (total size of RSDT minus
+ * header size) divided by the size of each entry
+ * (4-byte table pointers).
+ */
+ tables = (header->length - sizeof(struct acpi_table_header)) / 4;
+
+ memcpy(&saved_rsdt, rsdt, sizeof(saved_rsdt));
+
+ if (saved_rsdt.header.length > sizeof(saved_rsdt)) {
+ printk(KERN_WARNING "ACPI: Too big length in RSDT: %d\n",
+ saved_rsdt.header.length);
+ return;
+ }
+
+printk("Begin table scan....\n");
+
+ for (i = 0; i < tables; i++) {
+ /* Map in header, then map in full table length. */
+ header = (struct acpi_table_header *)
+ boot_ioremap(saved_rsdt.entry[i], sizeof(struct acpi_table_header));
+ if (!header)
+ break;
+ header = (struct acpi_table_header *)
+ boot_ioremap(saved_rsdt.entry[i], header->length);
+ if (!header)
+ break;
+
+ if (strncmp((char *) &header->signature, "SRAT", 4))
+ continue;
+ acpi20_parse_srat((struct acpi_table_srat *)header);
+ /* we've found the srat table. don't need to look at any more tables */
+ break;
+ }
+}
+
+/* For each node run the memory list to determine whether there are
+ * any memory holes. For each hole determine which ZONE they fall
+ * into.
+ *
+ * NOTE#1: this requires knowledge of the zone boundaries and so
+ * _cannot_ be performed before those are calculated in setup_memory.
+ *
+ * NOTE#2: we rely on the fact that the memory chunks are ordered by
+ * start pfn number during setup.
+ */
+static void __init get_zholes_init(void)
+{
+ int nid;
+ int c;
+ int first;
+ unsigned long end = 0;
+
+ for (nid = 0; nid < numnodes; nid++) {
+ first = 1;
+ for (c = 0; c < num_memory_chunks; c++){
+ if (node_memory_chunk[c].nid == nid) {
+ if (first) {
+ end = node_memory_chunk[c].end_pfn;
+ first = 0;
+
+ } else {
+ /* Record any gap between this chunk
+ * and the previous chunk on this node
+ * against the zones it spans.
+ */
+ chunk_to_zones(end,
+ node_memory_chunk[c].start_pfn,
+ &zholes_size[nid * MAX_NR_ZONES]);
+ }
+ }
+ }
+ }
+}
+
+unsigned long * __init get_zholes_size(int nid)
+{
+ if (!zholes_size_init) {
+ zholes_size_init++;
+ get_zholes_init();
+ }
+ if ((nid >= numnodes) || (nid >= MAX_NUMNODES))
+ printk("%s: nid = %d is invalid. numnodes = %d\n",
+ __FUNCTION__, nid, numnodes);
+ return &zholes_size[nid * MAX_NR_ZONES];
+}
diff -Nru a/arch/i386/kernel/suspend.c b/arch/i386/kernel/suspend.c
--- a/arch/i386/kernel/suspend.c Tue Mar 4 19:30:12 2003
+++ b/arch/i386/kernel/suspend.c Tue Mar 4 19:30:12 2003
@@ -113,7 +113,7 @@
int cpu = smp_processor_id();
struct tss_struct * t = init_tss + cpu;
- set_tss_desc(cpu,t); /* This just modifies memory; should not be neccessary. But... This is neccessary, because 386 hardware has concept of busy TSS or some similar stupidity. */
+ set_tss_desc(cpu,t); /* This just modifies memory; should not be necessary. But... This is necessary, because 386 hardware has concept of busy TSS or some similar stupidity. */
cpu_gdt_table[cpu][GDT_ENTRY_TSS].b &= 0xfffffdff;
load_TR_desc(); /* This does ltr */
diff -Nru a/arch/i386/kernel/sysenter.c b/arch/i386/kernel/sysenter.c
--- a/arch/i386/kernel/sysenter.c Tue Mar 4 19:30:14 2003
+++ b/arch/i386/kernel/sysenter.c Tue Mar 4 19:30:14 2003
@@ -95,8 +95,7 @@
return 0;
memcpy((void *) page, sysent, sizeof(sysent));
- enable_sep_cpu(NULL);
- smp_call_function(enable_sep_cpu, NULL, 1, 1);
+ on_each_cpu(enable_sep_cpu, NULL, 1, 1);
return 0;
}
diff -Nru a/arch/i386/kernel/time.c b/arch/i386/kernel/time.c
--- a/arch/i386/kernel/time.c Tue Mar 4 19:30:08 2003
+++ b/arch/i386/kernel/time.c Tue Mar 4 19:30:08 2003
@@ -66,7 +66,7 @@
#include "do_timer.h"
-u64 jiffies_64;
+u64 jiffies_64 = INITIAL_JIFFIES;
unsigned long cpu_khz; /* Detected as we calibrate the TSC */
diff -Nru a/arch/i386/kernel/timers/timer_tsc.c b/arch/i386/kernel/timers/timer_tsc.c
--- a/arch/i386/kernel/timers/timer_tsc.c Tue Mar 4 19:30:11 2003
+++ b/arch/i386/kernel/timers/timer_tsc.c Tue Mar 4 19:30:11 2003
@@ -264,7 +264,7 @@
* the ident/bugs checks so we must run this hook as it
* may turn off the TSC flag.
*
- * NOTE: this doesnt yet handle SMP 486 machines where only
+ * NOTE: this doesn't yet handle SMP 486 machines where only
* some CPU's have a TSC. Thats never worked and nobody has
* moaned if you have the only one in the world - you fix it!
*/
@@ -299,6 +299,7 @@
return -ENODEV;
}
+#ifndef CONFIG_X86_TSC
/* disable flag for tsc. Takes effect by clearing the TSC cpu flag
* in cpu/common.c */
static int __init tsc_setup(char *str)
@@ -306,7 +307,14 @@
tsc_disable = 1;
return 1;
}
-
+#else
+static int __init tsc_setup(char *str)
+{
+ printk(KERN_WARNING "notsc: Kernel compiled with CONFIG_X86_TSC, "
+ "cannot disable TSC.\n");
+ return 1;
+}
+#endif
__setup("notsc", tsc_setup);
diff -Nru a/arch/i386/lib/mmx.c b/arch/i386/lib/mmx.c
--- a/arch/i386/lib/mmx.c Tue Mar 4 19:30:09 2003
+++ b/arch/i386/lib/mmx.c Tue Mar 4 19:30:09 2003
@@ -15,7 +15,7 @@
* (reported so on K6-III)
* We should use a better code neutral filler for the short jump
* leal ebx. [ebx] is apparently best for K6-2, but Cyrix ??
- * We also want to clobber the filler register so we dont get any
+ * We also want to clobber the filler register so we don't get any
* register forwarding stalls on the filler.
*
* Add *user handling. Checksums are not a win with MMX on any CPU
diff -Nru a/arch/i386/mach-visws/visws_apic.c b/arch/i386/mach-visws/visws_apic.c
--- a/arch/i386/mach-visws/visws_apic.c Tue Mar 4 19:30:04 2003
+++ b/arch/i386/mach-visws/visws_apic.c Tue Mar 4 19:30:04 2003
@@ -190,7 +190,7 @@
* the 'master' interrupt source: CO_IRQ_8259.
*
* When the 8259 interrupts its handler figures out which of these
- * devices is interrupting and dispatches to it's handler.
+ * devices is interrupting and dispatches to its handler.
*
* CAREFUL: devices see the 'virtual' interrupt only. Thus disable/
* enable_irq gets the right irq. This 'master' irq is never directly
diff -Nru a/arch/i386/mach-voyager/voyager_smp.c b/arch/i386/mach-voyager/voyager_smp.c
--- a/arch/i386/mach-voyager/voyager_smp.c Tue Mar 4 19:30:07 2003
+++ b/arch/i386/mach-voyager/voyager_smp.c Tue Mar 4 19:30:07 2003
@@ -1209,8 +1209,8 @@
smp_call_function_interrupt();
}
-static inline void
-do_flush_tlb_all_local(void)
+static void
+do_flush_tlb_all(void* info)
{
unsigned long cpu = smp_processor_id();
@@ -1220,19 +1220,11 @@
}
-static void
-flush_tlb_all_function(void* info)
-{
- do_flush_tlb_all_local();
-}
-
/* flush the TLB of every active CPU in the system */
void
flush_tlb_all(void)
{
- smp_call_function (flush_tlb_all_function, 0, 1, 1);
-
- do_flush_tlb_all_local();
+ on_each_cpu(do_flush_tlb_all, 0, 1, 1);
}
/* used to set up the trampoline for other CPUs when the memory manager
@@ -1453,7 +1445,7 @@
}
/* send a CPI at level cpi to a set of cpus in cpuset (set 1 bit per
- * processor to recieve CPI */
+ * processor to receive CPI */
static void
send_CPI(__u32 cpuset, __u8 cpi)
{
@@ -1481,7 +1473,7 @@
outb((__u8)cpuset, VIC_CPI_Registers[VIC_CPI_LEVEL0]);
}
-/* Acknowlege receipt of CPI in the QIC, clear in QIC hardware and
+/* Acknowledge receipt of CPI in the QIC, clear in QIC hardware and
* set the cache line to shared by reading it.
*
* DON'T make this inline otherwise the cache line read will be
diff -Nru a/arch/i386/mm/Makefile b/arch/i386/mm/Makefile
--- a/arch/i386/mm/Makefile Tue Mar 4 19:30:05 2003
+++ b/arch/i386/mm/Makefile Tue Mar 4 19:30:05 2003
@@ -2,8 +2,9 @@
# Makefile for the linux i386-specific parts of the memory manager.
#
-obj-y := init.o pgtable.o fault.o ioremap.o extable.o pageattr.o
+obj-y := init.o pgtable.o fault.o ioremap.o extable.o pageattr.o
obj-$(CONFIG_DISCONTIGMEM) += discontig.o
obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
obj-$(CONFIG_HIGHMEM) += highmem.o
+obj-$(CONFIG_BOOT_IOREMAP) += boot_ioremap.o
diff -Nru a/arch/i386/mm/boot_ioremap.c b/arch/i386/mm/boot_ioremap.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/arch/i386/mm/boot_ioremap.c Tue Mar 4 19:30:14 2003
@@ -0,0 +1,94 @@
+/*
+ * arch/i386/mm/boot_ioremap.c
+ *
+ * Re-map functions for early boot-time before paging_init() when the
+ * boot-time pagetables are still in use
+ *
+ * Written by Dave Hansen
+ */
+
+
+/*
+ * We need to use the 2-level pagetable functions, but CONFIG_X86_PAE
+ * keeps that from happening. If anyone has a better way, I'm listening.
+ *
+ * boot_pte_t is defined only if this all works correctly
+ */
+
+#include
+#undef CONFIG_X86_PAE
+#include
+#include
+#include
+#include
+
+/*
+ * I'm cheating here. It is known that the two boot PTE pages are
+ * allocated next to each other. I'm pretending that they're just
+ * one big array.
+ */
+
+#define BOOT_PTE_PTRS (PTRS_PER_PTE*2)
+#define boot_pte_index(address) \
+ (((address) >> PAGE_SHIFT) & (BOOT_PTE_PTRS - 1))
+
+static inline boot_pte_t* boot_vaddr_to_pte(void *address)
+{
+ boot_pte_t* boot_pg = (boot_pte_t*)pg0;
+ return &boot_pg[boot_pte_index((unsigned long)address)];
+}
+
+/*
+ * This is only for a caller who is clever enough to page-align
+ * phys_addr and virtual_source, and who also has a preference
+ * about which virtual address from which to steal ptes
+ */
+static void __boot_ioremap(unsigned long phys_addr, unsigned long nrpages,
+ void* virtual_source)
+{
+ boot_pte_t* pte;
+ int i;
+
+ pte = boot_vaddr_to_pte(virtual_source);
+ for (i=0; i < nrpages; i++, phys_addr += PAGE_SIZE, pte++) {
+ set_pte(pte, pfn_pte(phys_addr>>PAGE_SHIFT, PAGE_KERNEL));
+ }
+}
+
+/* the virtual space we're going to remap comes from this array */
+#define BOOT_IOREMAP_PAGES 4
+#define BOOT_IOREMAP_SIZE (BOOT_IOREMAP_PAGES*PAGE_SIZE)
+__initdata char boot_ioremap_space[BOOT_IOREMAP_SIZE]
+ __attribute__ ((aligned (PAGE_SIZE)));
+
+/*
+ * This only applies to things which need to ioremap before paging_init()
+ * bt_ioremap() and plain ioremap() are both useless at this point.
+ *
+ * When used, we're still using the boot-time pagetables, which only
+ * have 2 PTE pages mapping the first 8MB
+ *
+ * There is no unmap. The boot-time PTE pages aren't used after boot.
+ * If you really want the space back, just remap it yourself.
+ * boot_ioremap(&ioremap_space-PAGE_OFFSET, BOOT_IOREMAP_SIZE)
+ */
+__init void* boot_ioremap(unsigned long phys_addr, unsigned long size)
+{
+ unsigned long last_addr, offset;
+ unsigned int nrpages;
+
+ last_addr = phys_addr + size - 1;
+
+ /* page align the requested address */
+ offset = phys_addr & ~PAGE_MASK;
+ phys_addr &= PAGE_MASK;
+ size = PAGE_ALIGN(last_addr) - phys_addr;
+
+ nrpages = size >> PAGE_SHIFT;
+ if (nrpages > BOOT_IOREMAP_PAGES)
+ return NULL;
+
+ __boot_ioremap(phys_addr, nrpages, boot_ioremap_space);
+
+ return &boot_ioremap_space[offset];
+}
diff -Nru a/arch/i386/mm/discontig.c b/arch/i386/mm/discontig.c
--- a/arch/i386/mm/discontig.c Tue Mar 4 19:30:09 2003
+++ b/arch/i386/mm/discontig.c Tue Mar 4 19:30:09 2003
@@ -48,6 +48,14 @@
extern unsigned long totalram_pages;
extern unsigned long totalhigh_pages;
+#define LARGE_PAGE_BYTES (PTRS_PER_PTE * PAGE_SIZE)
+
+unsigned long node_remap_start_pfn[MAX_NUMNODES];
+unsigned long node_remap_size[MAX_NUMNODES];
+unsigned long node_remap_offset[MAX_NUMNODES];
+void *node_remap_start_vaddr[MAX_NUMNODES];
+void set_pmd_pfn(unsigned long vaddr, unsigned long pfn, pgprot_t flags);
+
/*
* Find the highest page frame number we have available for the node
*/
@@ -65,12 +73,13 @@
*/
static void __init allocate_pgdat(int nid)
{
- unsigned long node_datasz;
-
- node_datasz = PFN_UP(sizeof(struct pglist_data));
- NODE_DATA(nid) = (pg_data_t *)(__va(min_low_pfn << PAGE_SHIFT));
- min_low_pfn += node_datasz;
- memset(NODE_DATA(nid), 0, sizeof(struct pglist_data));
+ if (nid)
+ NODE_DATA(nid) = (pg_data_t *)node_remap_start_vaddr[nid];
+ else {
+ NODE_DATA(nid) = (pg_data_t *)(__va(min_low_pfn << PAGE_SHIFT));
+ min_low_pfn += PFN_UP(sizeof(pg_data_t));
+ memset(NODE_DATA(nid), 0, sizeof(pg_data_t));
+ }
}
/*
@@ -113,14 +122,6 @@
}
}
-#define LARGE_PAGE_BYTES (PTRS_PER_PTE * PAGE_SIZE)
-
-unsigned long node_remap_start_pfn[MAX_NUMNODES];
-unsigned long node_remap_size[MAX_NUMNODES];
-unsigned long node_remap_offset[MAX_NUMNODES];
-void *node_remap_start_vaddr[MAX_NUMNODES];
-extern void set_pmd_pfn(unsigned long vaddr, unsigned long pfn, pgprot_t flags);
-
void __init remap_numa_kva(void)
{
void *vaddr;
@@ -145,7 +146,7 @@
for (nid = 1; nid < numnodes; nid++) {
/* calculate the size of the mem_map needed in bytes */
size = (node_end_pfn[nid] - node_start_pfn[nid] + 1)
- * sizeof(struct page);
+ * sizeof(struct page) + sizeof(pg_data_t);
/* convert size to large (pmd size) pages, rounding up */
size = (size + LARGE_PAGE_BYTES - 1) / LARGE_PAGE_BYTES;
/* now the roundup is correct, convert to PAGE_SIZE pages */
@@ -195,9 +196,9 @@
printk("Low memory ends at vaddr %08lx\n",
(ulong) pfn_to_kaddr(max_low_pfn));
for (nid = 0; nid < numnodes; nid++) {
- allocate_pgdat(nid);
node_remap_start_vaddr[nid] = pfn_to_kaddr(
highstart_pfn - node_remap_offset[nid]);
+ allocate_pgdat(nid);
printk ("node %d will remap to vaddr %08lx - %08lx\n", nid,
(ulong) node_remap_start_vaddr[nid],
(ulong) pfn_to_kaddr(highstart_pfn
@@ -251,13 +252,6 @@
*/
find_smp_config();
- /*insert other nodes into pgdat_list*/
- for (nid = 1; nid < numnodes; nid++){
- NODE_DATA(nid)->pgdat_next = pgdat_list;
- pgdat_list = NODE_DATA(nid);
- }
-
-
#ifdef CONFIG_BLK_DEV_INITRD
if (LOADER_TYPE && INITRD_START) {
if (INITRD_START + INITRD_SIZE <= (system_max_low_pfn << PAGE_SHIFT)) {
@@ -282,8 +276,21 @@
{
int nid;
+ /*
+ * Insert nodes into pgdat_list backward so they appear in order.
+ * Clobber node 0's links and NULL out pgdat_list before starting.
+ */
+ pgdat_list = NULL;
+ for (nid = numnodes - 1; nid >= 0; nid--) {
+ if (nid)
+ memset(NODE_DATA(nid), 0, sizeof(pg_data_t));
+ NODE_DATA(nid)->pgdat_next = pgdat_list;
+ pgdat_list = NODE_DATA(nid);
+ }
+
for (nid = 0; nid < numnodes; nid++) {
unsigned long zones_size[MAX_NR_ZONES] = {0, 0, 0};
+ unsigned long *zholes_size;
unsigned int max_dma;
unsigned long low = max_low_pfn;
@@ -307,18 +314,24 @@
#endif
}
}
+ zholes_size = get_zholes_size(nid);
/*
* We let the lmem_map for node 0 be allocated from the
* normal bootmem allocator, but other nodes come from the
* remapped KVA area - mbligh
*/
- if (nid)
- free_area_init_node(nid, NODE_DATA(nid),
- node_remap_start_vaddr[nid], zones_size,
- start, 0);
- else
+ if (!nid)
free_area_init_node(nid, NODE_DATA(nid), 0,
- zones_size, start, 0);
+ zones_size, start, zholes_size);
+ else {
+ unsigned long lmem_map;
+ lmem_map = (unsigned long)node_remap_start_vaddr[nid];
+ lmem_map += sizeof(pg_data_t) + PAGE_SIZE - 1;
+ lmem_map &= PAGE_MASK;
+ free_area_init_node(nid, NODE_DATA(nid),
+ (struct page *)lmem_map, zones_size,
+ start, zholes_size);
+ }
}
return;
}
diff -Nru a/arch/i386/mm/hugetlbpage.c b/arch/i386/mm/hugetlbpage.c
--- a/arch/i386/mm/hugetlbpage.c Tue Mar 4 19:30:13 2003
+++ b/arch/i386/mm/hugetlbpage.c Tue Mar 4 19:30:13 2003
@@ -29,6 +29,8 @@
static LIST_HEAD(htlbpage_freelist);
static spinlock_t htlbpage_lock = SPIN_LOCK_UNLOCKED;
+void free_huge_page(struct page *page);
+
static struct page *alloc_hugetlb_page(void)
{
int i;
@@ -45,7 +47,7 @@
htlbpagemem--;
spin_unlock(&htlbpage_lock);
set_page_count(page, 1);
- page->lru.prev = (void *)huge_page_release;
+ page->lru.prev = (void *)free_huge_page;
for (i = 0; i < (HPAGE_SIZE/PAGE_SIZE); ++i)
clear_highpage(&page[i]);
return page;
diff -Nru a/arch/i386/mm/ioremap.c b/arch/i386/mm/ioremap.c
--- a/arch/i386/mm/ioremap.c Tue Mar 4 19:30:05 2003
+++ b/arch/i386/mm/ioremap.c Tue Mar 4 19:30:05 2003
@@ -205,6 +205,7 @@
iounmap(p);
p = NULL;
}
+ global_flush_tlb();
}
return p;
@@ -226,6 +227,7 @@
change_page_attr(virt_to_page(__va(p->phys_addr)),
p->size >> PAGE_SHIFT,
PAGE_KERNEL);
+ global_flush_tlb();
}
kfree(p);
}
diff -Nru a/arch/i386/mm/pageattr.c b/arch/i386/mm/pageattr.c
--- a/arch/i386/mm/pageattr.c Tue Mar 4 19:30:08 2003
+++ b/arch/i386/mm/pageattr.c Tue Mar 4 19:30:08 2003
@@ -130,11 +130,8 @@
}
static inline void flush_map(void)
-{
-#ifdef CONFIG_SMP
- smp_call_function(flush_kernel_map, NULL, 1, 1);
-#endif
- flush_kernel_map(NULL);
+{
+ on_each_cpu(flush_kernel_map, NULL, 1, 1);
}
struct deferred_page {
diff -Nru a/arch/i386/mm/pgtable.c b/arch/i386/mm/pgtable.c
--- a/arch/i386/mm/pgtable.c Tue Mar 4 19:30:14 2003
+++ b/arch/i386/mm/pgtable.c Tue Mar 4 19:30:14 2003
@@ -11,6 +11,7 @@
#include
#include
#include
+#include
#include
#include
diff -Nru a/arch/i386/oprofile/nmi_int.c b/arch/i386/oprofile/nmi_int.c
--- a/arch/i386/oprofile/nmi_int.c Tue Mar 4 19:30:13 2003
+++ b/arch/i386/oprofile/nmi_int.c Tue Mar 4 19:30:13 2003
@@ -95,8 +95,7 @@
* without actually triggering any NMIs as this will
* break the core code horrifically.
*/
- smp_call_function(nmi_cpu_setup, NULL, 0, 1);
- nmi_cpu_setup(0);
+ on_each_cpu(nmi_cpu_setup, NULL, 0, 1);
set_nmi_callback(nmi_callback);
oprofile_pmdev = set_nmi_pm_callback(oprofile_pm_callback);
return 0;
@@ -148,8 +147,7 @@
{
unset_nmi_pm_callback(oprofile_pmdev);
unset_nmi_callback();
- smp_call_function(nmi_cpu_shutdown, NULL, 0, 1);
- nmi_cpu_shutdown(0);
+ on_each_cpu(nmi_cpu_shutdown, NULL, 0, 1);
}
@@ -162,8 +160,7 @@
static int nmi_start(void)
{
- smp_call_function(nmi_cpu_start, NULL, 0, 1);
- nmi_cpu_start(0);
+ on_each_cpu(nmi_cpu_start, NULL, 0, 1);
return 0;
}
@@ -177,8 +174,7 @@
static void nmi_stop(void)
{
- smp_call_function(nmi_cpu_stop, NULL, 0, 1);
- nmi_cpu_stop(0);
+ on_each_cpu(nmi_cpu_stop, NULL, 0, 1);
}
diff -Nru a/arch/i386/pci/numa.c b/arch/i386/pci/numa.c
--- a/arch/i386/pci/numa.c Tue Mar 4 19:30:05 2003
+++ b/arch/i386/pci/numa.c Tue Mar 4 19:30:05 2003
@@ -17,7 +17,7 @@
{
unsigned long flags;
- if (!value || (bus > 255) || (dev > 31) || (fn > 7) || (reg > 255))
+ if (!value || (bus > MAX_MP_BUSSES) || (dev > 31) || (fn > 7) || (reg > 255))
return -EINVAL;
spin_lock_irqsave(&pci_config_lock, flags);
@@ -45,7 +45,7 @@
{
unsigned long flags;
- if ((bus > 255) || (dev > 31) || (fn > 7) || (reg > 255))
+ if ((bus > MAX_MP_BUSSES) || (dev > 31) || (fn > 7) || (reg > 255))
return -EINVAL;
spin_lock_irqsave(&pci_config_lock, flags);
diff -Nru a/arch/i386/pci/visws.c b/arch/i386/pci/visws.c
--- a/arch/i386/pci/visws.c Tue Mar 4 19:30:12 2003
+++ b/arch/i386/pci/visws.c Tue Mar 4 19:30:12 2003
@@ -52,7 +52,7 @@
pin--;
- /* Nothing usefull at PIIX4 pin 1 */
+ /* Nothing useful at PIIX4 pin 1 */
if (bus == pci_bus0 && slot == 4 && pin == 0)
return -1;
diff -Nru a/arch/ia64/ia32/ia32_signal.c b/arch/ia64/ia32/ia32_signal.c
--- a/arch/ia64/ia32/ia32_signal.c Tue Mar 4 19:30:04 2003
+++ b/arch/ia64/ia32/ia32_signal.c Tue Mar 4 19:30:04 2003
@@ -338,7 +338,7 @@
/*
* Updating fsr, fcr, fir, fdr.
* Just a bit more complicated than save.
- * - Need to make sure that we dont write any value other than the
+ * - Need to make sure that we don't write any value other than the
* specific fpstate info
* - Need to make sure that the untouched part of frs, fdr, fir, fcr
* should remain same while writing.
diff -Nru a/arch/ia64/kernel/irq.c b/arch/ia64/kernel/irq.c
--- a/arch/ia64/kernel/irq.c Tue Mar 4 19:30:13 2003
+++ b/arch/ia64/kernel/irq.c Tue Mar 4 19:30:13 2003
@@ -104,7 +104,7 @@
{
/*
* 'what should we do if we get a hw irq event on an illegal vector'.
- * each architecture has to answer this themselves, it doesnt deserve
+ * each architecture has to answer this themselves, it doesn't deserve
* a generic callback i think.
*/
#if CONFIG_X86
diff -Nru a/arch/ia64/kernel/minstate.h b/arch/ia64/kernel/minstate.h
--- a/arch/ia64/kernel/minstate.h Tue Mar 4 19:30:13 2003
+++ b/arch/ia64/kernel/minstate.h Tue Mar 4 19:30:13 2003
@@ -26,7 +26,7 @@
*/
/*
- * For ivt.s we want to access the stack virtually so we dont have to disable translation
+ * For ivt.s we want to access the stack virtually so we don't have to disable translation
* on interrupts.
*/
#define MINSTATE_START_SAVE_MIN_VIRT \
@@ -52,7 +52,7 @@
/*
* For mca_asm.S we want to access the stack physically since the state is saved before we
- * go virtual and dont want to destroy the iip or ipsr.
+ * go virtual and don't want to destroy the iip or ipsr.
*/
#define MINSTATE_START_SAVE_MIN_PHYS \
(pKStk) movl sp=ia64_init_stack+IA64_STK_OFFSET-IA64_PT_REGS_SIZE; \
diff -Nru a/arch/ia64/kernel/perfmon.c b/arch/ia64/kernel/perfmon.c
--- a/arch/ia64/kernel/perfmon.c Tue Mar 4 19:30:13 2003
+++ b/arch/ia64/kernel/perfmon.c Tue Mar 4 19:30:13 2003
@@ -718,7 +718,7 @@
/*
* counts the number of PMDS to save per entry.
- * This code is generic enough to accomodate more than 64 PMDS when they become available
+ * This code is generic enough to accommodate more than 64 PMDS when they become available
*/
static unsigned long
pfm_smpl_entry_size(unsigned long *which, unsigned long size)
diff -Nru a/arch/ia64/kernel/smp.c b/arch/ia64/kernel/smp.c
--- a/arch/ia64/kernel/smp.c Tue Mar 4 19:30:14 2003
+++ b/arch/ia64/kernel/smp.c Tue Mar 4 19:30:14 2003
@@ -206,18 +206,18 @@
void
smp_flush_tlb_all (void)
{
- smp_call_function((void (*)(void *))local_flush_tlb_all, 0, 1, 1);
- local_flush_tlb_all();
+ on_each_cpu((void (*)(void *))local_flush_tlb_all, 0, 1, 1);
}
void
smp_flush_tlb_mm (struct mm_struct *mm)
{
- local_finish_flush_tlb_mm(mm);
-
/* this happens for the common case of a single-threaded fork(): */
if (likely(mm == current->active_mm && atomic_read(&mm->mm_users) == 1))
+ {
+ local_finish_flush_tlb_mm(mm);
return;
+ }
/*
* We could optimize this further by using mm->cpu_vm_mask to track which CPUs
@@ -226,7 +226,7 @@
* anyhow, and once a CPU is interrupted, the cost of local_flush_tlb_all() is
* rather trivial.
*/
- smp_call_function((void (*)(void *))local_finish_flush_tlb_mm, mm, 1, 1);
+ on_each_cpu((void (*)(void *))local_finish_flush_tlb_mm, mm, 1, 1);
}
/*
diff -Nru a/arch/ia64/kernel/time.c b/arch/ia64/kernel/time.c
--- a/arch/ia64/kernel/time.c Tue Mar 4 19:30:05 2003
+++ b/arch/ia64/kernel/time.c Tue Mar 4 19:30:05 2003
@@ -27,7 +27,7 @@
extern unsigned long wall_jiffies;
extern unsigned long last_time_offset;
-u64 jiffies_64;
+u64 jiffies_64 = INITIAL_JIFFIES;
#ifdef CONFIG_IA64_DEBUG_IRQ
diff -Nru a/arch/ia64/lib/checksum.c b/arch/ia64/lib/checksum.c
--- a/arch/ia64/lib/checksum.c Tue Mar 4 19:30:13 2003
+++ b/arch/ia64/lib/checksum.c Tue Mar 4 19:30:13 2003
@@ -50,7 +50,7 @@
((unsigned long) ntohs(len) << 16) +
((unsigned long) proto << 8));
- /* Fold down to 32-bits so we don't loose in the typedef-less network stack. */
+ /* Fold down to 32-bits so we don't lose in the typedef-less network stack. */
/* 64 to 33 */
result = (result & 0xffffffff) + (result >> 32);
/* 33 to 32 */
diff -Nru a/arch/ia64/lib/do_csum.S b/arch/ia64/lib/do_csum.S
--- a/arch/ia64/lib/do_csum.S Tue Mar 4 19:30:04 2003
+++ b/arch/ia64/lib/do_csum.S Tue Mar 4 19:30:04 2003
@@ -41,7 +41,7 @@
// into one 8 byte word. In this case we have only one entry in the pipeline.
//
// We use a (LOAD_LATENCY+2)-stage pipeline in the loop to account for
-// possible load latency and also to accomodate for head and tail.
+// possible load latency and also to accommodate for head and tail.
//
// The end of the function deals with folding the checksum from 64bits
// down to 16bits taking care of the carry.
diff -Nru a/arch/ia64/lib/swiotlb.c b/arch/ia64/lib/swiotlb.c
--- a/arch/ia64/lib/swiotlb.c Tue Mar 4 19:30:11 2003
+++ b/arch/ia64/lib/swiotlb.c Tue Mar 4 19:30:11 2003
@@ -359,7 +359,7 @@
* was provided for in a previous swiotlb_map_single call. All other usages are
* undefined.
*
- * After this call, reads by the cpu to the buffer are guarenteed to see whatever the
+ * After this call, reads by the cpu to the buffer are guaranteed to see whatever the
* device wrote there.
*/
void
diff -Nru a/arch/ia64/mm/discontig.c b/arch/ia64/mm/discontig.c
--- a/arch/ia64/mm/discontig.c Tue Mar 4 19:30:13 2003
+++ b/arch/ia64/mm/discontig.c Tue Mar 4 19:30:13 2003
@@ -241,7 +241,7 @@
* - build the nodedir for the node. This contains pointers to
* the per-bank mem_map entries.
* - fix the page struct "virtual" pointers. These are bank specific
- * values that the paging system doesnt understand.
+ * values that the paging system doesn't understand.
* - replicate the nodedir structure to other nodes
*/
diff -Nru a/arch/ia64/mm/hugetlbpage.c b/arch/ia64/mm/hugetlbpage.c
--- a/arch/ia64/mm/hugetlbpage.c Tue Mar 4 19:30:09 2003
+++ b/arch/ia64/mm/hugetlbpage.c Tue Mar 4 19:30:09 2003
@@ -26,6 +26,8 @@
static LIST_HEAD(htlbpage_freelist);
static spinlock_t htlbpage_lock = SPIN_LOCK_UNLOCKED;
+void free_huge_page(struct page *page);
+
static struct page *alloc_hugetlb_page(void)
{
int i;
@@ -42,6 +44,7 @@
htlbpagemem--;
spin_unlock(&htlbpage_lock);
set_page_count(page, 1);
+ page->lru.prev = (void *)free_huge_page;
for (i = 0; i < (HPAGE_SIZE/PAGE_SIZE); ++i)
clear_highpage(&page[i]);
return page;
diff -Nru a/arch/ia64/sn/fakeprom/fpmem.c b/arch/ia64/sn/fakeprom/fpmem.c
--- a/arch/ia64/sn/fakeprom/fpmem.c Tue Mar 4 19:30:07 2003
+++ b/arch/ia64/sn/fakeprom/fpmem.c Tue Mar 4 19:30:07 2003
@@ -218,7 +218,7 @@
}
/*
- * Check for the node 0 hole. Since banks cant
+ * Check for the node 0 hole. Since banks can't
* span the hole, we only need to check if the end of
* the range is the end of the hole.
*/
@@ -226,7 +226,7 @@
numbytes -= NODE0_HOLE_SIZE;
/*
* UGLY hack - we must skip over the kernel and
- * PROM runtime services but we dont exactly where it is.
+ * PROM runtime services but we don't know exactly where it is.
* So lets just reserve:
* node 0
* 0-1MB for PAL
diff -Nru a/arch/ia64/sn/fakeprom/fw-emu.c b/arch/ia64/sn/fakeprom/fw-emu.c
--- a/arch/ia64/sn/fakeprom/fw-emu.c Tue Mar 4 19:30:12 2003
+++ b/arch/ia64/sn/fakeprom/fw-emu.c Tue Mar 4 19:30:12 2003
@@ -757,7 +757,7 @@
sal_systab->checksum = -checksum;
/* If the checksum is correct, the kernel tries to use the
- * table. We dont build enough table & the kernel aborts.
+ * table. We don't build enough table & the kernel aborts.
* Note that the PROM has the same problem!!
*/
diff -Nru a/arch/ia64/sn/io/hcl.c b/arch/ia64/sn/io/hcl.c
--- a/arch/ia64/sn/io/hcl.c Tue Mar 4 19:30:12 2003
+++ b/arch/ia64/sn/io/hcl.c Tue Mar 4 19:30:12 2003
@@ -467,7 +467,7 @@
/*
* We need to clean up!
*/
- printk(KERN_WARNING "HCL: Unable to set the connect point to it's parent 0x%p\n",
+ printk(KERN_WARNING "HCL: Unable to set the connect point to its parent 0x%p\n",
(void *)new_devfs_handle);
}
diff -Nru a/arch/ia64/sn/io/l1.c b/arch/ia64/sn/io/l1.c
--- a/arch/ia64/sn/io/l1.c Tue Mar 4 19:30:13 2003
+++ b/arch/ia64/sn/io/l1.c Tue Mar 4 19:30:13 2003
@@ -2602,7 +2602,7 @@
{
sc_cq_t *q; /* receive queue */
int before_wrap, /* packet may be split into two different */
- after_wrap; /* pieces to acommodate queue wraparound */
+ after_wrap; /* pieces to accommodate queue wraparound */
/* pull message off the receive queue */
q = subch->iqp;
diff -Nru a/arch/ia64/sn/io/sn1/pcibr.c b/arch/ia64/sn/io/sn1/pcibr.c
--- a/arch/ia64/sn/io/sn1/pcibr.c Tue Mar 4 19:30:11 2003
+++ b/arch/ia64/sn/io/sn1/pcibr.c Tue Mar 4 19:30:11 2003
@@ -2647,7 +2647,7 @@
/*
* The Adaptec 1160 FC Controller WAR #767995:
* The part incorrectly ignores the upper 32 bits of a 64 bit
- * address when decoding references to it's registers so to
+ * address when decoding references to its registers so to
* keep it from responding to a bus cycle that it shouldn't
* we only use I/O space to get at it's registers. Don't
* enable memory space accesses on that PCI device.
@@ -5113,7 +5113,7 @@
/* Bridge Hardware Bug WAR #484930:
* Bridge can't handle updating External ATEs
- * while DMA is occuring that uses External ATEs,
+ * while DMA is occurring that uses External ATEs,
* even if the particular ATEs involved are disjoint.
*/
@@ -6844,7 +6844,7 @@
*
* This is the pcibr interrupt "wrapper" function that is called,
* in interrupt context, to initiate the interrupt handler(s) registered
- * (via pcibr_intr_alloc/connect) for the occuring interrupt. Non-threaded
+ * (via pcibr_intr_alloc/connect) for the occurring interrupt. Non-threaded
* handlers will be called directly, and threaded handlers will have their
* thread woken up.
*/
diff -Nru a/arch/ia64/sn/io/sn2/pcibr/pcibr_ate.c b/arch/ia64/sn/io/sn2/pcibr/pcibr_ate.c
--- a/arch/ia64/sn/io/sn2/pcibr/pcibr_ate.c Tue Mar 4 19:30:04 2003
+++ b/arch/ia64/sn/io/sn2/pcibr/pcibr_ate.c Tue Mar 4 19:30:04 2003
@@ -362,7 +362,7 @@
/* Bridge Hardware Bug WAR #484930:
* Bridge can't handle updating External ATEs
- * while DMA is occuring that uses External ATEs,
+ * while DMA is occurring that uses External ATEs,
* even if the particular ATEs involved are disjoint.
*/
diff -Nru a/arch/ia64/sn/io/sn2/pcibr/pcibr_dvr.c b/arch/ia64/sn/io/sn2/pcibr/pcibr_dvr.c
--- a/arch/ia64/sn/io/sn2/pcibr/pcibr_dvr.c Tue Mar 4 19:30:14 2003
+++ b/arch/ia64/sn/io/sn2/pcibr/pcibr_dvr.c Tue Mar 4 19:30:14 2003
@@ -849,7 +849,7 @@
* will set the c_slot (which is suppose to represent the external
* slot (i.e the slot number silk screened on the back of the I/O
* brick)). So for PIC we need to adjust this "internal slot" num
- * passed into us, into it's external representation. See comment
+ * passed into us, into its external representation. See comment
* for the PCIBR_DEVICE_TO_SLOT macro for more information.
*/
NEW(pcibr_info);
@@ -1527,7 +1527,7 @@
/* enable parity checking on PICs internal RAM */
pic_ctrl_reg |= PIC_CTRL_PAR_EN_RESP;
pic_ctrl_reg |= PIC_CTRL_PAR_EN_ATE;
- /* PIC BRINGUP WAR (PV# 862253): dont enable write request
+ /* PIC BRINGUP WAR (PV# 862253): don't enable write request
* parity checking.
*/
if (!PCIBR_WAR_ENABLED(PV862253, pcibr_soft)) {
diff -Nru a/arch/ia64/sn/io/sn2/pcibr/pcibr_error.c b/arch/ia64/sn/io/sn2/pcibr/pcibr_error.c
--- a/arch/ia64/sn/io/sn2/pcibr/pcibr_error.c Tue Mar 4 19:30:04 2003
+++ b/arch/ia64/sn/io/sn2/pcibr/pcibr_error.c Tue Mar 4 19:30:04 2003
@@ -1806,7 +1806,7 @@
*
* CAUTION: Resetting bit BRIDGE_IRR_PCI_GRP_CLR, acknowledges
* a group of interrupts. If while handling this error,
- * some other error has occured, that would be
+ * some other error has occurred, that would be
* implicitly cleared by this write.
* Need a way to ensure we don't inadvertently clear some
* other errors.
diff -Nru a/arch/ia64/sn/io/sn2/pcibr/pcibr_intr.c b/arch/ia64/sn/io/sn2/pcibr/pcibr_intr.c
--- a/arch/ia64/sn/io/sn2/pcibr/pcibr_intr.c Tue Mar 4 19:30:12 2003
+++ b/arch/ia64/sn/io/sn2/pcibr/pcibr_intr.c Tue Mar 4 19:30:12 2003
@@ -842,7 +842,7 @@
*
* This is the pcibr interrupt "wrapper" function that is called,
* in interrupt context, to initiate the interrupt handler(s) registered
- * (via pcibr_intr_alloc/connect) for the occuring interrupt. Non-threaded
+ * (via pcibr_intr_alloc/connect) for the occurring interrupt. Non-threaded
* handlers will be called directly, and threaded handlers will have their
* thread woken up.
*/
diff -Nru a/arch/ia64/sn/io/sn2/pcibr/pcibr_slot.c b/arch/ia64/sn/io/sn2/pcibr/pcibr_slot.c
--- a/arch/ia64/sn/io/sn2/pcibr/pcibr_slot.c Tue Mar 4 19:30:08 2003
+++ b/arch/ia64/sn/io/sn2/pcibr/pcibr_slot.c Tue Mar 4 19:30:08 2003
@@ -803,7 +803,7 @@
* 'min_gnt' and attempt to calculate a latency time.
*
* NOTE: For now if the device is on the 'real time' arbitration
- * ring we dont set the latency timer.
+ * ring we don't set the latency timer.
*
* WAR: SGI's IOC3 and RAD devices target abort if you write a
* single byte into their config space. So don't set the Latency
@@ -852,7 +852,7 @@
}
/* Get the PCI-X capability if running in PCI-X mode. If the func
- * doesnt have a pcix capability, allocate a PCIIO_VENDOR_ID_NONE
+ * doesn't have a pcix capability, allocate a PCIIO_VENDOR_ID_NONE
* pcibr_info struct so the device driver for that function is not
* called.
*/
@@ -1449,7 +1449,7 @@
/*
* The Adaptec 1160 FC Controller WAR #767995:
* The part incorrectly ignores the upper 32 bits of a 64 bit
- * address when decoding references to it's registers so to
+ * address when decoding references to its registers so to
* keep it from responding to a bus cycle that it shouldn't
* we only use I/O space to get at it's registers. Don't
* enable memory space accesses on that PCI device.
diff -Nru a/arch/ia64/sn/kernel/llsc4.c b/arch/ia64/sn/kernel/llsc4.c
--- a/arch/ia64/sn/kernel/llsc4.c Tue Mar 4 19:30:12 2003
+++ b/arch/ia64/sn/kernel/llsc4.c Tue Mar 4 19:30:12 2003
@@ -301,7 +301,7 @@
*/
linei = randn(linecount, &seed);
sharei = randn(2, &seed);
- slinei = (linei + (linecount/2))%linecount; /* I dont like this - fix later */
+ slinei = (linei + (linecount/2))%linecount; /* I don't like this - fix later */
linep = (dataline_t *)blocks[linei];
slinep = (dataline_t *)blocks[slinei];
diff -Nru a/arch/ia64/sn/kernel/setup.c b/arch/ia64/sn/kernel/setup.c
--- a/arch/ia64/sn/kernel/setup.c Tue Mar 4 19:30:05 2003
+++ b/arch/ia64/sn/kernel/setup.c Tue Mar 4 19:30:05 2003
@@ -153,7 +153,7 @@
/**
* early_sn_setup - early setup routine for SN platforms
*
- * Sets up an intial console to aid debugging. Intended primarily
+ * Sets up an initial console to aid debugging. Intended primarily
* for bringup, it's only called if %BRINGUP and %CONFIG_IA64_EARLY_PRINTK
* are turned on. See start_kernel() in init/main.c.
*/
@@ -172,7 +172,7 @@
/*
* Parse enough of the SAL tables to locate the SAL entry point. Since, console
- * IO on SN2 is done via SAL calls, early_printk wont work without this.
+ * IO on SN2 is done via SAL calls, early_printk won't work without this.
*
* This code duplicates some of the ACPI table parsing that is in efi.c & sal.c.
* Any changes to those files may have to be made here as well.
diff -Nru a/arch/ia64/sn/kernel/sn1/sn1_smp.c b/arch/ia64/sn/kernel/sn1/sn1_smp.c
--- a/arch/ia64/sn/kernel/sn1/sn1_smp.c Tue Mar 4 19:30:07 2003
+++ b/arch/ia64/sn/kernel/sn1/sn1_smp.c Tue Mar 4 19:30:07 2003
@@ -100,7 +100,7 @@
/*
* The following table/struct is for remembering PTC coherency domains. It
- * is also used to translate sapicid into cpuids. We dont want to start
+ * is also used to translate sapicid into cpuids. We don't want to start
* cpus unless we know their cache domain.
*/
#ifdef PTC_NOTYET
diff -Nru a/arch/ia64/sn/kernel/sn2/sn2_smp.c b/arch/ia64/sn/kernel/sn2/sn2_smp.c
--- a/arch/ia64/sn/kernel/sn2/sn2_smp.c Tue Mar 4 19:30:13 2003
+++ b/arch/ia64/sn/kernel/sn2/sn2_smp.c Tue Mar 4 19:30:13 2003
@@ -395,7 +395,7 @@
mycnode = local_nodeid;
/*
- * For now, we dont want to spin uninterruptibly waiting
+ * For now, we don't want to spin uninterruptibly waiting
* for the lock. Makes hangs hard to debug.
*/
local_irq_save(flags);
@@ -506,7 +506,7 @@
pio_phys_write_mmr(p, val);
#ifndef CONFIG_SHUB_1_0_SPECIFIC
- /* doesnt work on shub 1.0 */
+ /* doesn't work on shub 1.0 */
wait_piowc();
#endif
}
diff -Nru a/arch/m68k/ifpsp060/src/fpsp.S b/arch/m68k/ifpsp060/src/fpsp.S
--- a/arch/m68k/ifpsp060/src/fpsp.S Tue Mar 4 19:30:14 2003
+++ b/arch/m68k/ifpsp060/src/fpsp.S Tue Mar 4 19:30:14 2003
@@ -2201,7 +2201,7 @@
mov.l LOCAL_SIZE+2+EXC_PC(%sp),LOCAL_SIZE+2+EXC_PC-0xc(%sp)
mov.l LOCAL_SIZE+EXC_EA(%sp),LOCAL_SIZE+EXC_EA-0xc(%sp)
-# now, we copy the default result to it's proper location
+# now, we copy the default result to its proper location
mov.l LOCAL_SIZE+FP_DST_EX(%sp),LOCAL_SIZE+0x4(%sp)
mov.l LOCAL_SIZE+FP_DST_HI(%sp),LOCAL_SIZE+0x8(%sp)
mov.l LOCAL_SIZE+FP_DST_LO(%sp),LOCAL_SIZE+0xc(%sp)
@@ -2241,7 +2241,7 @@
mov.l LOCAL_SIZE+2+EXC_PC(%sp),LOCAL_SIZE+2+EXC_PC-0xc(%sp)
mov.l LOCAL_SIZE+EXC_EA(%sp),LOCAL_SIZE+EXC_EA-0xc(%sp)
-# now, we copy the default result to it's proper location
+# now, we copy the default result to its proper location
mov.l LOCAL_SIZE+FP_DST_EX(%sp),LOCAL_SIZE+0x4(%sp)
mov.l LOCAL_SIZE+FP_DST_HI(%sp),LOCAL_SIZE+0x8(%sp)
mov.l LOCAL_SIZE+FP_DST_LO(%sp),LOCAL_SIZE+0xc(%sp)
@@ -2281,7 +2281,7 @@
mov.l LOCAL_SIZE+2+EXC_PC(%sp),LOCAL_SIZE+2+EXC_PC-0xc(%sp)
mov.l LOCAL_SIZE+EXC_EA(%sp),LOCAL_SIZE+EXC_EA-0xc(%sp)
-# now, we copy the default result to it's proper location
+# now, we copy the default result to its proper location
mov.l LOCAL_SIZE+FP_DST_EX(%sp),LOCAL_SIZE+0x4(%sp)
mov.l LOCAL_SIZE+FP_DST_HI(%sp),LOCAL_SIZE+0x8(%sp)
mov.l LOCAL_SIZE+FP_DST_LO(%sp),LOCAL_SIZE+0xc(%sp)
diff -Nru a/arch/m68k/ifpsp060/src/isp.S b/arch/m68k/ifpsp060/src/isp.S
--- a/arch/m68k/ifpsp060/src/isp.S Tue Mar 4 19:30:05 2003
+++ b/arch/m68k/ifpsp060/src/isp.S Tue Mar 4 19:30:05 2003
@@ -843,7 +843,7 @@
bra.l _real_access
# if the addressing mode was (an)+ or -(an), the address register must
-# be restored to it's pre-exception value before entering _real_access.
+# be restored to its pre-exception value before entering _real_access.
isp_restore:
cmpi.b SPCOND_FLG(%a6),&restore_flg # do we need a restore?
bne.b isp_restore_done # no
diff -Nru a/arch/m68k/ifpsp060/src/pfpsp.S b/arch/m68k/ifpsp060/src/pfpsp.S
--- a/arch/m68k/ifpsp060/src/pfpsp.S Tue Mar 4 19:30:04 2003
+++ b/arch/m68k/ifpsp060/src/pfpsp.S Tue Mar 4 19:30:04 2003
@@ -2200,7 +2200,7 @@
mov.l LOCAL_SIZE+2+EXC_PC(%sp),LOCAL_SIZE+2+EXC_PC-0xc(%sp)
mov.l LOCAL_SIZE+EXC_EA(%sp),LOCAL_SIZE+EXC_EA-0xc(%sp)
-# now, we copy the default result to it's proper location
+# now, we copy the default result to its proper location
mov.l LOCAL_SIZE+FP_DST_EX(%sp),LOCAL_SIZE+0x4(%sp)
mov.l LOCAL_SIZE+FP_DST_HI(%sp),LOCAL_SIZE+0x8(%sp)
mov.l LOCAL_SIZE+FP_DST_LO(%sp),LOCAL_SIZE+0xc(%sp)
@@ -2240,7 +2240,7 @@
mov.l LOCAL_SIZE+2+EXC_PC(%sp),LOCAL_SIZE+2+EXC_PC-0xc(%sp)
mov.l LOCAL_SIZE+EXC_EA(%sp),LOCAL_SIZE+EXC_EA-0xc(%sp)
-# now, we copy the default result to it's proper location
+# now, we copy the default result to its proper location
mov.l LOCAL_SIZE+FP_DST_EX(%sp),LOCAL_SIZE+0x4(%sp)
mov.l LOCAL_SIZE+FP_DST_HI(%sp),LOCAL_SIZE+0x8(%sp)
mov.l LOCAL_SIZE+FP_DST_LO(%sp),LOCAL_SIZE+0xc(%sp)
@@ -2280,7 +2280,7 @@
mov.l LOCAL_SIZE+2+EXC_PC(%sp),LOCAL_SIZE+2+EXC_PC-0xc(%sp)
mov.l LOCAL_SIZE+EXC_EA(%sp),LOCAL_SIZE+EXC_EA-0xc(%sp)
-# now, we copy the default result to it's proper location
+# now, we copy the default result to its proper location
mov.l LOCAL_SIZE+FP_DST_EX(%sp),LOCAL_SIZE+0x4(%sp)
mov.l LOCAL_SIZE+FP_DST_HI(%sp),LOCAL_SIZE+0x8(%sp)
mov.l LOCAL_SIZE+FP_DST_LO(%sp),LOCAL_SIZE+0xc(%sp)
diff -Nru a/arch/m68k/kernel/head.S b/arch/m68k/kernel/head.S
--- a/arch/m68k/kernel/head.S Tue Mar 4 19:30:14 2003
+++ b/arch/m68k/kernel/head.S Tue Mar 4 19:30:14 2003
@@ -3127,7 +3127,7 @@
moveb %d0,M162_SCC_CTRL_A
jra 3f
5:
- /* 166/167/177; its a CD2401 */
+ /* 166/167/177; it's a CD2401 */
moveb #0,M167_CYCAR
moveb M167_CYIER,%d2
moveb #0x02,M167_CYIER
diff -Nru a/arch/m68k/kernel/time.c b/arch/m68k/kernel/time.c
--- a/arch/m68k/kernel/time.c Tue Mar 4 19:30:05 2003
+++ b/arch/m68k/kernel/time.c Tue Mar 4 19:30:05 2003
@@ -26,7 +26,7 @@
#include
#include
-u64 jiffies_64;
+u64 jiffies_64 = INITIAL_JIFFIES;
static inline int set_rtc_mmss(unsigned long nowtime)
{
diff -Nru a/arch/m68k/math-emu/fp_util.S b/arch/m68k/math-emu/fp_util.S
--- a/arch/m68k/math-emu/fp_util.S Tue Mar 4 19:30:14 2003
+++ b/arch/m68k/math-emu/fp_util.S Tue Mar 4 19:30:14 2003
@@ -49,7 +49,7 @@
* is currently at that time unused, be careful if you want change
* something here. %d0 and %d1 is always usable, sometimes %d2 (or
* only the lower half) most function have to return the %a0
- * unmodified, so that the caller can immediatly reuse it.
+ * unmodified, so that the caller can immediately reuse it.
*/
.globl fp_ill, fp_end
diff -Nru a/arch/m68k/q40/README b/arch/m68k/q40/README
--- a/arch/m68k/q40/README Tue Mar 4 19:30:03 2003
+++ b/arch/m68k/q40/README Tue Mar 4 19:30:03 2003
@@ -16,7 +16,7 @@
particular device drivers.
The floppy imposes a very high interrupt load on the CPU, approx 30K/s.
-When something blocks interrupts (HD) it will loose some of them, so far
+When something blocks interrupts (HD) it will lose some of them, so far
this is not known to have caused any data loss. On highly loaded systems
it can make the floppy very slow or practically stop. Other Q40 OS' simply
poll the floppy for this reason - something that can't be done in Linux.
diff -Nru a/arch/m68k/sun3/config.c b/arch/m68k/sun3/config.c
--- a/arch/m68k/sun3/config.c Tue Mar 4 19:30:11 2003
+++ b/arch/m68k/sun3/config.c Tue Mar 4 19:30:11 2003
@@ -119,7 +119,7 @@
{
unsigned long start_page;
- /* align start/end to page boundries */
+ /* align start/end to page boundaries */
memory_start = ((memory_start + (PAGE_SIZE-1)) & PAGE_MASK);
memory_end = memory_end & PAGE_MASK;
diff -Nru a/arch/m68knommu/kernel/ints.c b/arch/m68knommu/kernel/ints.c
--- a/arch/m68knommu/kernel/ints.c Tue Mar 4 19:30:05 2003
+++ b/arch/m68knommu/kernel/ints.c Tue Mar 4 19:30:05 2003
@@ -214,7 +214,7 @@
/*
* Do we need these probe functions on the m68k?
*
- * ... may be usefull with ISA devices
+ * ... may be useful with ISA devices
*/
unsigned long probe_irq_on (void)
{
diff -Nru a/arch/m68knommu/kernel/syscalltable.S b/arch/m68knommu/kernel/syscalltable.S
--- a/arch/m68knommu/kernel/syscalltable.S Tue Mar 4 19:30:03 2003
+++ b/arch/m68knommu/kernel/syscalltable.S Tue Mar 4 19:30:03 2003
@@ -14,6 +14,7 @@
#include
#include
#include
+#include
.text
ALIGN
diff -Nru a/arch/m68knommu/kernel/time.c b/arch/m68knommu/kernel/time.c
--- a/arch/m68knommu/kernel/time.c Tue Mar 4 19:30:11 2003
+++ b/arch/m68knommu/kernel/time.c Tue Mar 4 19:30:11 2003
@@ -26,7 +26,7 @@
#define TICK_SIZE (tick_nsec / 1000)
-u64 jiffies_64;
+u64 jiffies_64 = INITIAL_JIFFIES;
static inline int set_rtc_mmss(unsigned long nowtime)
{
diff -Nru a/arch/m68knommu/platform/5307/entry.S b/arch/m68knommu/platform/5307/entry.S
--- a/arch/m68knommu/platform/5307/entry.S Tue Mar 4 19:30:14 2003
+++ b/arch/m68knommu/platform/5307/entry.S Tue Mar 4 19:30:14 2003
@@ -27,6 +27,7 @@
#include
#include
#include
+#include
#include
#include
#include
diff -Nru a/arch/m68knommu/platform/5307/vectors.c b/arch/m68knommu/platform/5307/vectors.c
--- a/arch/m68knommu/platform/5307/vectors.c Tue Mar 4 19:30:13 2003
+++ b/arch/m68knommu/platform/5307/vectors.c Tue Mar 4 19:30:13 2003
@@ -13,6 +13,7 @@
#include
#include
#include
+#include
#include
#include
#include
diff -Nru a/arch/m68knommu/platform/68360/ints.c b/arch/m68knommu/platform/68360/ints.c
--- a/arch/m68knommu/platform/68360/ints.c Tue Mar 4 19:30:03 2003
+++ b/arch/m68knommu/platform/68360/ints.c Tue Mar 4 19:30:03 2003
@@ -291,7 +291,7 @@
/* unsigned long pend = *(volatile unsigned long *)pquicc->intr_cipr; */
- /* Bugger all that wierdness. For the moment, I seem to know where I came from;
+ /* Bugger all that weirdness. For the moment, I seem to know where I came from;
* vec is passed from a specific ISR, so I'll use it. */
if (int_irq_list[irq] && int_irq_list[irq]->handler) {
diff -Nru a/arch/mips/au1000/common/serial.c b/arch/mips/au1000/common/serial.c
--- a/arch/mips/au1000/common/serial.c Tue Mar 4 19:30:05 2003
+++ b/arch/mips/au1000/common/serial.c Tue Mar 4 19:30:05 2003
@@ -2703,7 +2703,7 @@
* port exists and is in use an error is returned. If the port
* is not currently in the table it is added.
*
- * The port is then probed and if neccessary the IRQ is autodetected
+ * The port is then probed and if necessary the IRQ is autodetected
* If this fails an error is returned.
*
* On success the port is ready to use and the line number is returned.
diff -Nru a/arch/mips/baget/wbflush.c b/arch/mips/baget/wbflush.c
--- a/arch/mips/baget/wbflush.c Tue Mar 4 19:30:11 2003
+++ b/arch/mips/baget/wbflush.c Tue Mar 4 19:30:11 2003
@@ -17,7 +17,7 @@
}
/*
- * Baget/MIPS doesnt need to write back the WB.
+ * Baget/MIPS doesn't need to write back the WB.
*/
static void wbflush_baget(void)
{
diff -Nru a/arch/mips/ddb5xxx/common/pci.c b/arch/mips/ddb5xxx/common/pci.c
--- a/arch/mips/ddb5xxx/common/pci.c Tue Mar 4 19:30:14 2003
+++ b/arch/mips/ddb5xxx/common/pci.c Tue Mar 4 19:30:14 2003
@@ -20,7 +20,7 @@
* Strategies:
*
* . We rely on pci_auto.c file to assign PCI resources (MEM and IO)
- * TODO: this shold be optional for some machines where they do have
+ * TODO: this should be optional for some machines where they do have
* a real "pcibios" that does resource assignment.
*
* . We then use pci_scan_bus() to "discover" all the resources for
diff -Nru a/arch/mips/dec/boot/decstation.c b/arch/mips/dec/boot/decstation.c
--- a/arch/mips/dec/boot/decstation.c Tue Mar 4 19:30:09 2003
+++ b/arch/mips/dec/boot/decstation.c Tue Mar 4 19:30:09 2003
@@ -70,7 +70,7 @@
#ifdef RELOC
/*
- * Now copy kernel image to it's destination.
+ * Now copy kernel image to its destination.
*/
len = ((unsigned long) (&_end) - k_start);
memcpy((void *)k_start, &_ftext, len);
diff -Nru a/arch/mips/kernel/irq.c b/arch/mips/kernel/irq.c
--- a/arch/mips/kernel/irq.c Tue Mar 4 19:30:04 2003
+++ b/arch/mips/kernel/irq.c Tue Mar 4 19:30:04 2003
@@ -44,7 +44,7 @@
{
/*
* 'what should we do if we get a hw irq event on an illegal vector'.
- * each architecture has to answer this themselves, it doesnt deserve
+ * each architecture has to answer this themselves, it doesn't deserve
* a generic callback i think.
*/
printk("unexpected interrupt %d\n", irq);
diff -Nru a/arch/mips/kernel/pci.c b/arch/mips/kernel/pci.c
--- a/arch/mips/kernel/pci.c Tue Mar 4 19:30:11 2003
+++ b/arch/mips/kernel/pci.c Tue Mar 4 19:30:11 2003
@@ -19,7 +19,7 @@
* Strategies:
*
* . We rely on pci_auto.c file to assign PCI resources (MEM and IO)
- * TODO: this shold be optional for some machines where they do have
+ * TODO: this should be optional for some machines where they do have
* a real "pcibios" that does resource assignment.
*
* . We then use pci_scan_bus() to "discover" all the resources for
diff -Nru a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
--- a/arch/mips/kernel/process.c Tue Mar 4 19:30:08 2003
+++ b/arch/mips/kernel/process.c Tue Mar 4 19:30:08 2003
@@ -112,7 +112,7 @@
p->thread.reg31 = (unsigned long) ret_from_fork;
/*
- * New tasks loose permission to use the fpu. This accelerates context
+ * New tasks lose permission to use the fpu. This accelerates context
* switching for most programs since they don't use the fpu.
*/
p->thread.cp0_status = read_32bit_cp0_register(CP0_STATUS) &
diff -Nru a/arch/mips/kernel/r2300_misc.S b/arch/mips/kernel/r2300_misc.S
--- a/arch/mips/kernel/r2300_misc.S Tue Mar 4 19:30:12 2003
+++ b/arch/mips/kernel/r2300_misc.S Tue Mar 4 19:30:12 2003
@@ -76,7 +76,7 @@
/* Check is PTE is present, if not then jump to LABEL.
* PTR points to the page table where this PTE is located,
* when the macro is done executing PTE will be restored
- * with it's original value.
+ * with its original value.
*/
#define PTE_PRESENT(pte, ptr, label) \
andi pte, pte, (_PAGE_PRESENT | _PAGE_READ); \
diff -Nru a/arch/mips/kernel/r2300_switch.S b/arch/mips/kernel/r2300_switch.S
--- a/arch/mips/kernel/r2300_switch.S Tue Mar 4 19:30:13 2003
+++ b/arch/mips/kernel/r2300_switch.S Tue Mar 4 19:30:13 2003
@@ -80,7 +80,7 @@
beqz a0, 2f # Save floating point state
nor t3, zero, t3
.set reorder
- lw t1, ST_OFF(a0) # last thread looses fpu
+ lw t1, ST_OFF(a0) # last thread loses fpu
and t1, t3
sw t1, ST_OFF(a0)
FPU_SAVE_SINGLE(a0, t1) # clobbers t1
@@ -108,7 +108,7 @@
/*
* Load the FPU with signalling NANS. This bit pattern we're using has
- * the property that no matter wether considered as single or as double
+ * the property that no matter whether considered as single or as double
* precission represents signaling NANS.
*
* We initialize fcr31 to rounding to nearest, no exceptions.
diff -Nru a/arch/mips/kernel/r4k_misc.S b/arch/mips/kernel/r4k_misc.S
--- a/arch/mips/kernel/r4k_misc.S Tue Mar 4 19:30:05 2003
+++ b/arch/mips/kernel/r4k_misc.S Tue Mar 4 19:30:05 2003
@@ -93,7 +93,7 @@
/* Check is PTE is present, if not then jump to LABEL.
* PTR points to the page table where this PTE is located,
* when the macro is done executing PTE will be restored
- * with it's original value.
+ * with its original value.
*/
#define PTE_PRESENT(pte, ptr, label) \
andi pte, pte, (_PAGE_PRESENT | _PAGE_READ); \
diff -Nru a/arch/mips/kernel/r4k_switch.S b/arch/mips/kernel/r4k_switch.S
--- a/arch/mips/kernel/r4k_switch.S Tue Mar 4 19:30:13 2003
+++ b/arch/mips/kernel/r4k_switch.S Tue Mar 4 19:30:13 2003
@@ -85,7 +85,7 @@
beqz a0, 2f # Save floating point state
nor t3, zero, t3
- lw t1, ST_OFF(a0) # last thread looses fpu
+ lw t1, ST_OFF(a0) # last thread loses fpu
and t1, t3
sw t1, ST_OFF(a0)
diff -Nru a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
--- a/arch/mips/kernel/setup.c Tue Mar 4 19:30:05 2003
+++ b/arch/mips/kernel/setup.c Tue Mar 4 19:30:05 2003
@@ -775,7 +775,7 @@
request_resource(&iomem_resource, res);
/*
- * We dont't know which RAM region contains kernel data,
+ * We don't know which RAM region contains kernel data,
* so we try it repeatedly and let the resource manager
* test it.
*/
diff -Nru a/arch/mips/kernel/time.c b/arch/mips/kernel/time.c
--- a/arch/mips/kernel/time.c Tue Mar 4 19:30:08 2003
+++ b/arch/mips/kernel/time.c Tue Mar 4 19:30:08 2003
@@ -32,7 +32,7 @@
#define USECS_PER_JIFFY (1000000/HZ)
#define USECS_PER_JIFFY_FRAC ((1000000ULL << 32) / HZ & 0xffffffff)
-u64 jiffies_64;
+u64 jiffies_64 = INITIAL_JIFFIES;
/*
* forward reference
diff -Nru a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
--- a/arch/mips/kernel/traps.c Tue Mar 4 19:30:08 2003
+++ b/arch/mips/kernel/traps.c Tue Mar 4 19:30:08 2003
@@ -793,7 +793,7 @@
/* Some firmware leaves the BEV flag set, clear it. */
clear_cp0_status(ST0_BEV);
- /* Copy the generic exception handler code to it's final destination. */
+ /* Copy the generic exception handler code to its final destination. */
memcpy((void *)(KSEG0 + 0x80), &except_vec1_generic, 0x80);
memcpy((void *)(KSEG0 + 0x100), &except_vec2_generic, 0x80);
memcpy((void *)(KSEG0 + 0x180), &except_vec3_generic, 0x80);
@@ -805,7 +805,7 @@
set_except_vector(i, handle_reserved);
/*
- * Copy the EJTAG debug exception vector handler code to it's final
+ * Copy the EJTAG debug exception vector handler code to its final
* destination.
*/
memcpy((void *)(KSEG0 + 0x300), &except_vec_ejtag_debug, 0x80);
diff -Nru a/arch/mips/math-emu/dp_add.c b/arch/mips/math-emu/dp_add.c
--- a/arch/mips/math-emu/dp_add.c Tue Mar 4 19:30:05 2003
+++ b/arch/mips/math-emu/dp_add.c Tue Mar 4 19:30:05 2003
@@ -73,7 +73,7 @@
return x;
- /* Inifity handeling
+ /* Infinity handling
*/
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_INF):
@@ -92,7 +92,7 @@
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_DNORM):
return x;
- /* Zero handeling
+ /* Zero handling
*/
case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_ZERO):
diff -Nru a/arch/mips/math-emu/dp_div.c b/arch/mips/math-emu/dp_div.c
--- a/arch/mips/math-emu/dp_div.c Tue Mar 4 19:30:04 2003
+++ b/arch/mips/math-emu/dp_div.c Tue Mar 4 19:30:04 2003
@@ -72,7 +72,7 @@
return x;
- /* Infinity handeling
+ /* Infinity handling
*/
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_INF):
@@ -89,7 +89,7 @@
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_DNORM):
return ieee754dp_inf(xs ^ ys);
- /* Zero handeling
+ /* Zero handling
*/
case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_ZERO):
diff -Nru a/arch/mips/math-emu/dp_mul.c b/arch/mips/math-emu/dp_mul.c
--- a/arch/mips/math-emu/dp_mul.c Tue Mar 4 19:30:08 2003
+++ b/arch/mips/math-emu/dp_mul.c Tue Mar 4 19:30:08 2003
@@ -72,7 +72,7 @@
return x;
- /* Infinity handeling */
+ /* Infinity handling */
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_ZERO):
case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_INF):
diff -Nru a/arch/mips/math-emu/dp_sub.c b/arch/mips/math-emu/dp_sub.c
--- a/arch/mips/math-emu/dp_sub.c Tue Mar 4 19:30:11 2003
+++ b/arch/mips/math-emu/dp_sub.c Tue Mar 4 19:30:11 2003
@@ -72,7 +72,7 @@
return x;
- /* Inifity handeling
+ /* Infinity handling
*/
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_INF):
@@ -91,7 +91,7 @@
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_DNORM):
return x;
- /* Zero handeling
+ /* Zero handling
*/
case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_ZERO):
diff -Nru a/arch/mips/math-emu/ieee754.c b/arch/mips/math-emu/ieee754.c
--- a/arch/mips/math-emu/ieee754.c Tue Mar 4 19:30:04 2003
+++ b/arch/mips/math-emu/ieee754.c Tue Mar 4 19:30:04 2003
@@ -3,7 +3,7 @@
*
* BUGS
* not much dp done
- * doesnt generate IEEE754_INEXACT
+ * doesn't generate IEEE754_INEXACT
*
*/
/*
diff -Nru a/arch/mips/math-emu/ieee754dp.c b/arch/mips/math-emu/ieee754dp.c
--- a/arch/mips/math-emu/ieee754dp.c Tue Mar 4 19:30:11 2003
+++ b/arch/mips/math-emu/ieee754dp.c Tue Mar 4 19:30:11 2003
@@ -99,14 +99,14 @@
}
-/* generate a normal/denormal number with over,under handeling
+/* generate a normal/denormal number with over,under handling
* sn is sign
* xe is an unbiased exponent
* xm is 3bit extended precision value.
*/
ieee754dp ieee754dp_format(int sn, int xe, unsigned long long xm)
{
- assert(xm); /* we dont gen exact zeros (probably should) */
+ assert(xm); /* we don't gen exact zeros (probably should) */
assert((xm >> (DP_MBITS + 1 + 3)) == 0); /* no execess */
assert(xm & (DP_HIDDEN_BIT << 3));
diff -Nru a/arch/mips/math-emu/ieee754sp.c b/arch/mips/math-emu/ieee754sp.c
--- a/arch/mips/math-emu/ieee754sp.c Tue Mar 4 19:30:06 2003
+++ b/arch/mips/math-emu/ieee754sp.c Tue Mar 4 19:30:06 2003
@@ -100,14 +100,14 @@
}
-/* generate a normal/denormal number with over,under handeling
+/* generate a normal/denormal number with over,under handling
* sn is sign
* xe is an unbiased exponent
* xm is 3bit extended precision value.
*/
ieee754sp ieee754sp_format(int sn, int xe, unsigned xm)
{
- assert(xm); /* we dont gen exact zeros (probably should) */
+ assert(xm); /* we don't gen exact zeros (probably should) */
assert((xm >> (SP_MBITS + 1 + 3)) == 0); /* no execess */
assert(xm & (SP_HIDDEN_BIT << 3));
diff -Nru a/arch/mips/math-emu/sp_add.c b/arch/mips/math-emu/sp_add.c
--- a/arch/mips/math-emu/sp_add.c Tue Mar 4 19:30:12 2003
+++ b/arch/mips/math-emu/sp_add.c Tue Mar 4 19:30:12 2003
@@ -72,7 +72,7 @@
return x;
- /* Inifity handeling
+ /* Infinity handling
*/
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_INF):
@@ -91,7 +91,7 @@
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_DNORM):
return x;
- /* Zero handeling
+ /* Zero handling
*/
case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_ZERO):
diff -Nru a/arch/mips/math-emu/sp_div.c b/arch/mips/math-emu/sp_div.c
--- a/arch/mips/math-emu/sp_div.c Tue Mar 4 19:30:12 2003
+++ b/arch/mips/math-emu/sp_div.c Tue Mar 4 19:30:12 2003
@@ -72,7 +72,7 @@
return x;
- /* Infinity handeling
+ /* Infinity handling
*/
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_INF):
@@ -89,7 +89,7 @@
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_DNORM):
return ieee754sp_inf(xs ^ ys);
- /* Zero handeling
+ /* Zero handling
*/
case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_ZERO):
diff -Nru a/arch/mips/math-emu/sp_fdp.c b/arch/mips/math-emu/sp_fdp.c
--- a/arch/mips/math-emu/sp_fdp.c Tue Mar 4 19:30:08 2003
+++ b/arch/mips/math-emu/sp_fdp.c Tue Mar 4 19:30:08 2003
@@ -49,7 +49,7 @@
case IEEE754_CLASS_ZERO:
return ieee754sp_zero(xs);
case IEEE754_CLASS_DNORM:
- /* cant possibly be sp representable */
+ /* can't possibly be sp representable */
SETCX(IEEE754_UNDERFLOW);
return ieee754sp_xcpt(ieee754sp_zero(xs), "fdp", x);
case IEEE754_CLASS_NORM:
diff -Nru a/arch/mips/math-emu/sp_mul.c b/arch/mips/math-emu/sp_mul.c
--- a/arch/mips/math-emu/sp_mul.c Tue Mar 4 19:30:10 2003
+++ b/arch/mips/math-emu/sp_mul.c Tue Mar 4 19:30:10 2003
@@ -72,7 +72,7 @@
return x;
- /* Infinity handeling */
+ /* Infinity handling */
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_ZERO):
case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_INF):
diff -Nru a/arch/mips/math-emu/sp_sub.c b/arch/mips/math-emu/sp_sub.c
--- a/arch/mips/math-emu/sp_sub.c Tue Mar 4 19:30:07 2003
+++ b/arch/mips/math-emu/sp_sub.c Tue Mar 4 19:30:07 2003
@@ -72,7 +72,7 @@
return x;
- /* Inifity handeling
+ /* Infinity handling
*/
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_INF):
@@ -91,7 +91,7 @@
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_DNORM):
return x;
- /* Zero handeling
+ /* Zero handling
*/
case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_ZERO):
diff -Nru a/arch/mips/mips-boards/generic/pci.c b/arch/mips/mips-boards/generic/pci.c
--- a/arch/mips/mips-boards/generic/pci.c Tue Mar 4 19:30:07 2003
+++ b/arch/mips/mips-boards/generic/pci.c Tue Mar 4 19:30:07 2003
@@ -81,7 +81,7 @@
if (intr & (GT_INTRCAUSE_MASABORT0_BIT | GT_INTRCAUSE_TARABORT0_BIT))
{
- /* Error occured */
+ /* Error occurred */
/* Clear bits */
GT_WRITE( GT_INTRCAUSE_OFS, ~(GT_INTRCAUSE_MASABORT0_BIT |
diff -Nru a/arch/mips64/kernel/process.c b/arch/mips64/kernel/process.c
--- a/arch/mips64/kernel/process.c Tue Mar 4 19:30:05 2003
+++ b/arch/mips64/kernel/process.c Tue Mar 4 19:30:05 2003
@@ -105,7 +105,7 @@
p->thread.reg31 = (unsigned long) ret_from_fork;
/*
- * New tasks loose permission to use the fpu. This accelerates context
+ * New tasks lose permission to use the fpu. This accelerates context
* switching for most programs since they don't use the fpu.
*/
p->thread.cp0_status = read_32bit_cp0_register(CP0_STATUS) &
diff -Nru a/arch/mips64/kernel/r4k_switch.S b/arch/mips64/kernel/r4k_switch.S
--- a/arch/mips64/kernel/r4k_switch.S Tue Mar 4 19:30:04 2003
+++ b/arch/mips64/kernel/r4k_switch.S Tue Mar 4 19:30:04 2003
@@ -79,7 +79,7 @@
beqz a0, 2f # Save floating point state
nor t3, zero, t3
- ld t1, ST_OFF(a0) # last thread looses fpu
+ ld t1, ST_OFF(a0) # last thread loses fpu
and t1, t3
sd t1, ST_OFF(a0)
sll t2, t1, 5
diff -Nru a/arch/mips64/kernel/smp.c b/arch/mips64/kernel/smp.c
--- a/arch/mips64/kernel/smp.c Tue Mar 4 19:30:11 2003
+++ b/arch/mips64/kernel/smp.c Tue Mar 4 19:30:11 2003
@@ -195,8 +195,7 @@
void flush_tlb_all(void)
{
- smp_call_function(flush_tlb_all_ipi, 0, 1, 1);
- _flush_tlb_all();
+ on_each_cpu(flush_tlb_all_ipi, 0, 1, 1);
}
static void flush_tlb_mm_ipi(void *mm)
@@ -219,6 +218,8 @@
void flush_tlb_mm(struct mm_struct *mm)
{
+ preempt_disable();
+
if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
smp_call_function(flush_tlb_mm_ipi, (void *)mm, 1, 1);
} else {
@@ -228,6 +229,8 @@
CPU_CONTEXT(i, mm) = 0;
}
_flush_tlb_mm(mm);
+
+ preempt_enable();
}
struct flush_tlb_data {
@@ -246,6 +249,8 @@
void flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
{
+ preempt_disable();
+
if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
struct flush_tlb_data fd;
@@ -260,6 +265,8 @@
CPU_CONTEXT(i, mm) = 0;
}
_flush_tlb_range(mm, start, end);
+
+ preempt_enable();
}
static void flush_tlb_page_ipi(void *info)
@@ -271,6 +278,8 @@
void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
{
+ preempt_disable();
+
if ((atomic_read(&vma->vm_mm->mm_users) != 1) || (current->mm != vma->vm_mm)) {
struct flush_tlb_data fd;
@@ -284,5 +293,7 @@
CPU_CONTEXT(i, vma->vm_mm) = 0;
}
_flush_tlb_page(vma, page);
+
+ preempt_enable();
}
diff -Nru a/arch/mips64/kernel/traps.c b/arch/mips64/kernel/traps.c
--- a/arch/mips64/kernel/traps.c Tue Mar 4 19:30:11 2003
+++ b/arch/mips64/kernel/traps.c Tue Mar 4 19:30:11 2003
@@ -497,7 +497,7 @@
/* Some firmware leaves the BEV flag set, clear it. */
set_cp0_status(ST0_BEV, 0);
- /* Copy the generic exception handler code to it's final destination. */
+ /* Copy the generic exception handler code to its final destination. */
memcpy((void *)(KSEG0 + 0x100), &except_vec2_generic, 0x80);
memcpy((void *)(KSEG0 + 0x180), &except_vec3_generic, 0x80);
diff -Nru a/arch/mips64/math-emu/dp_add.c b/arch/mips64/math-emu/dp_add.c
--- a/arch/mips64/math-emu/dp_add.c Tue Mar 4 19:30:14 2003
+++ b/arch/mips64/math-emu/dp_add.c Tue Mar 4 19:30:14 2003
@@ -73,7 +73,7 @@
return x;
- /* Inifity handeling
+ /* Infinity handling
*/
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_INF):
@@ -92,7 +92,7 @@
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_DNORM):
return x;
- /* Zero handeling
+ /* Zero handling
*/
case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_ZERO):
diff -Nru a/arch/mips64/math-emu/dp_div.c b/arch/mips64/math-emu/dp_div.c
--- a/arch/mips64/math-emu/dp_div.c Tue Mar 4 19:30:03 2003
+++ b/arch/mips64/math-emu/dp_div.c Tue Mar 4 19:30:03 2003
@@ -72,7 +72,7 @@
return x;
- /* Infinity handeling
+ /* Infinity handling
*/
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_INF):
@@ -89,7 +89,7 @@
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_DNORM):
return ieee754dp_inf(xs ^ ys);
- /* Zero handeling
+ /* Zero handling
*/
case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_ZERO):
diff -Nru a/arch/mips64/math-emu/dp_mul.c b/arch/mips64/math-emu/dp_mul.c
--- a/arch/mips64/math-emu/dp_mul.c Tue Mar 4 19:30:14 2003
+++ b/arch/mips64/math-emu/dp_mul.c Tue Mar 4 19:30:14 2003
@@ -72,7 +72,7 @@
return x;
- /* Infinity handeling */
+ /* Infinity handling */
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_ZERO):
case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_INF):
diff -Nru a/arch/mips64/math-emu/dp_sub.c b/arch/mips64/math-emu/dp_sub.c
--- a/arch/mips64/math-emu/dp_sub.c Tue Mar 4 19:30:08 2003
+++ b/arch/mips64/math-emu/dp_sub.c Tue Mar 4 19:30:08 2003
@@ -72,7 +72,7 @@
return x;
- /* Inifity handeling
+ /* Infinity handling
*/
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_INF):
@@ -91,7 +91,7 @@
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_DNORM):
return x;
- /* Zero handeling
+ /* Zero handling
*/
case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_ZERO):
diff -Nru a/arch/mips64/math-emu/ieee754dp.c b/arch/mips64/math-emu/ieee754dp.c
--- a/arch/mips64/math-emu/ieee754dp.c Tue Mar 4 19:30:13 2003
+++ b/arch/mips64/math-emu/ieee754dp.c Tue Mar 4 19:30:13 2003
@@ -99,14 +99,14 @@
}
-/* generate a normal/denormal number with over,under handeling
+/* generate a normal/denormal number with over,under handling
* sn is sign
* xe is an unbiased exponent
* xm is 3bit extended precision value.
*/
ieee754dp ieee754dp_format(int sn, int xe, unsigned long long xm)
{
- assert(xm); /* we dont gen exact zeros (probably should) */
+ assert(xm); /* we don't gen exact zeros (probably should) */
assert((xm >> (DP_MBITS + 1 + 3)) == 0); /* no execess */
assert(xm & (DP_HIDDEN_BIT << 3));
diff -Nru a/arch/mips64/math-emu/ieee754sp.c b/arch/mips64/math-emu/ieee754sp.c
--- a/arch/mips64/math-emu/ieee754sp.c Tue Mar 4 19:30:10 2003
+++ b/arch/mips64/math-emu/ieee754sp.c Tue Mar 4 19:30:10 2003
@@ -100,14 +100,14 @@
}
-/* generate a normal/denormal number with over,under handeling
+/* generate a normal/denormal number with over,under handling
* sn is sign
* xe is an unbiased exponent
* xm is 3bit extended precision value.
*/
ieee754sp ieee754sp_format(int sn, int xe, unsigned xm)
{
- assert(xm); /* we dont gen exact zeros (probably should) */
+ assert(xm); /* we don't gen exact zeros (probably should) */
assert((xm >> (SP_MBITS + 1 + 3)) == 0); /* no execess */
assert(xm & (SP_HIDDEN_BIT << 3));
diff -Nru a/arch/mips64/math-emu/sp_add.c b/arch/mips64/math-emu/sp_add.c
--- a/arch/mips64/math-emu/sp_add.c Tue Mar 4 19:30:05 2003
+++ b/arch/mips64/math-emu/sp_add.c Tue Mar 4 19:30:05 2003
@@ -72,7 +72,7 @@
return x;
- /* Inifity handeling
+ /* Infinity handling
*/
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_INF):
@@ -91,7 +91,7 @@
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_DNORM):
return x;
- /* Zero handeling
+ /* Zero handling
*/
case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_ZERO):
diff -Nru a/arch/mips64/math-emu/sp_div.c b/arch/mips64/math-emu/sp_div.c
--- a/arch/mips64/math-emu/sp_div.c Tue Mar 4 19:30:14 2003
+++ b/arch/mips64/math-emu/sp_div.c Tue Mar 4 19:30:14 2003
@@ -72,7 +72,7 @@
return x;
- /* Infinity handeling
+ /* Infinity handling
*/
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_INF):
@@ -89,7 +89,7 @@
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_DNORM):
return ieee754sp_inf(xs ^ ys);
- /* Zero handeling
+ /* Zero handling
*/
case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_ZERO):
diff -Nru a/arch/mips64/math-emu/sp_fdp.c b/arch/mips64/math-emu/sp_fdp.c
--- a/arch/mips64/math-emu/sp_fdp.c Tue Mar 4 19:30:13 2003
+++ b/arch/mips64/math-emu/sp_fdp.c Tue Mar 4 19:30:13 2003
@@ -49,7 +49,7 @@
case IEEE754_CLASS_ZERO:
return ieee754sp_zero(xs);
case IEEE754_CLASS_DNORM:
- /* cant possibly be sp representable */
+ /* can't possibly be sp representable */
SETCX(IEEE754_UNDERFLOW);
return ieee754sp_xcpt(ieee754sp_zero(xs), "fdp", x);
case IEEE754_CLASS_NORM:
diff -Nru a/arch/mips64/math-emu/sp_mul.c b/arch/mips64/math-emu/sp_mul.c
--- a/arch/mips64/math-emu/sp_mul.c Tue Mar 4 19:30:05 2003
+++ b/arch/mips64/math-emu/sp_mul.c Tue Mar 4 19:30:05 2003
@@ -72,7 +72,7 @@
return x;
- /* Infinity handeling */
+ /* Infinity handling */
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_ZERO):
case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_INF):
diff -Nru a/arch/mips64/math-emu/sp_sub.c b/arch/mips64/math-emu/sp_sub.c
--- a/arch/mips64/math-emu/sp_sub.c Tue Mar 4 19:30:08 2003
+++ b/arch/mips64/math-emu/sp_sub.c Tue Mar 4 19:30:08 2003
@@ -72,7 +72,7 @@
return x;
- /* Inifity handeling
+ /* Infinity handling
*/
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_INF):
@@ -91,7 +91,7 @@
case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_DNORM):
return x;
- /* Zero handeling
+ /* Zero handling
*/
case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_ZERO):
diff -Nru a/arch/mips64/mips-boards/generic/pci.c b/arch/mips64/mips-boards/generic/pci.c
--- a/arch/mips64/mips-boards/generic/pci.c Tue Mar 4 19:30:04 2003
+++ b/arch/mips64/mips-boards/generic/pci.c Tue Mar 4 19:30:04 2003
@@ -87,7 +87,7 @@
if (intr & (GT_INTRCAUSE_MASABORT0_BIT | GT_INTRCAUSE_TARABORT0_BIT))
{
- /* Error occured */
+ /* Error occurred */
/* Clear bits */
GT_WRITE( GT_INTRCAUSE_OFS, ~(GT_INTRCAUSE_MASABORT0_BIT |
diff -Nru a/arch/mips64/sgi-ip27/ip27-nmi.c b/arch/mips64/sgi-ip27/ip27-nmi.c
--- a/arch/mips64/sgi-ip27/ip27-nmi.c Tue Mar 4 19:30:03 2003
+++ b/arch/mips64/sgi-ip27/ip27-nmi.c Tue Mar 4 19:30:03 2003
@@ -127,7 +127,7 @@
* This is for 2 reasons:
* - sometimes a MMSC fail to NMI all cpus.
* - on 512p SN0 system, the MMSC will only send NMIs to
- * half the cpus. Unfortunately, we dont know which cpus may be
+ * half the cpus. Unfortunately, we don't know which cpus may be
* NMIed - it depends on how the site chooses to configure.
*
* Note: it has been measure that it takes the MMSC up to 2.3 secs to
diff -Nru a/arch/mips64/sgi-ip27/ip27-pci-dma.c b/arch/mips64/sgi-ip27/ip27-pci-dma.c
--- a/arch/mips64/sgi-ip27/ip27-pci-dma.c Tue Mar 4 19:30:11 2003
+++ b/arch/mips64/sgi-ip27/ip27-pci-dma.c Tue Mar 4 19:30:11 2003
@@ -74,7 +74,7 @@
* must match what was provided for in a previous pci_map_single call. All
* other usages are undefined.
*
- * After this call, reads by the cpu to the buffer are guarenteed to see
+ * After this call, reads by the cpu to the buffer are guaranteed to see
* whatever the device wrote there.
*/
void pci_unmap_single(struct pci_dev *hwdev, dma_addr_t dma_addr,
diff -Nru a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
--- a/arch/parisc/kernel/cache.c Tue Mar 4 19:30:08 2003
+++ b/arch/parisc/kernel/cache.c Tue Mar 4 19:30:08 2003
@@ -39,8 +39,7 @@
void
flush_data_cache(void)
{
- smp_call_function((void (*)(void *))flush_data_cache_local, NULL, 1, 1);
- flush_data_cache_local();
+ on_each_cpu((void (*)(void *))flush_data_cache_local, NULL, 1, 1);
}
#endif
diff -Nru a/arch/parisc/kernel/irq.c b/arch/parisc/kernel/irq.c
--- a/arch/parisc/kernel/irq.c Tue Mar 4 19:30:07 2003
+++ b/arch/parisc/kernel/irq.c Tue Mar 4 19:30:07 2003
@@ -61,20 +61,17 @@
static spinlock_t irq_lock = SPIN_LOCK_UNLOCKED; /* protect IRQ regions */
-#ifdef CONFIG_SMP
static void cpu_set_eiem(void *info)
{
set_eiem((unsigned long) info);
}
-#endif
static inline void disable_cpu_irq(void *unused, int irq)
{
unsigned long eirr_bit = EIEM_MASK(irq);
cpu_eiem &= ~eirr_bit;
- set_eiem(cpu_eiem);
- smp_call_function(cpu_set_eiem, (void *) cpu_eiem, 1, 1);
+ on_each_cpu(cpu_set_eiem, (void *) cpu_eiem, 1, 1);
}
static void enable_cpu_irq(void *unused, int irq)
@@ -83,8 +80,7 @@
mtctl(eirr_bit, 23); /* clear EIRR bit before unmasking */
cpu_eiem |= eirr_bit;
- smp_call_function(cpu_set_eiem, (void *) cpu_eiem, 1, 1);
- set_eiem(cpu_eiem);
+ on_each_cpu(cpu_set_eiem, (void *) cpu_eiem, 1, 1);
}
/* mask and disable are the same at the CPU level
@@ -100,8 +96,7 @@
** handle *any* unmasked pending interrupts.
** ie We don't need to check for pending interrupts here.
*/
- smp_call_function(cpu_set_eiem, (void *) cpu_eiem, 1, 1);
- set_eiem(cpu_eiem);
+ on_each_cpu(cpu_set_eiem, (void *) cpu_eiem, 1, 1);
}
/*
@@ -349,7 +344,7 @@
/*
-** The alloc process needs to accept a parameter to accomodate limitations
+** The alloc process needs to accept a parameter to accommodate limitations
** of the HW/SW which use these bits:
** Legacy PA I/O (GSC/NIO): 5 bits (architected EIM register)
** V-class (EPIC): 6 bits
diff -Nru a/arch/parisc/kernel/perf.c b/arch/parisc/kernel/perf.c
--- a/arch/parisc/kernel/perf.c Tue Mar 4 19:30:07 2003
+++ b/arch/parisc/kernel/perf.c Tue Mar 4 19:30:07 2003
@@ -255,7 +255,7 @@
}
/*
- * Open the device and initialize all of it's memory. The device is only
+ * Open the device and initialize all of its memory. The device is only
* opened once, but can be "queried" by multiple processes that know its
* file descriptor.
*/
diff -Nru a/arch/parisc/kernel/perf_images.h b/arch/parisc/kernel/perf_images.h
--- a/arch/parisc/kernel/perf_images.h Tue Mar 4 19:30:04 2003
+++ b/arch/parisc/kernel/perf_images.h Tue Mar 4 19:30:04 2003
@@ -1556,7 +1556,7 @@
* IRTN_AV fires twice for every I-cache miss returning from RIB to the IFU.
* It will not fire if a second I-cache miss is issued from the IFU to RIB
* before the first returns. Therefore, if the IRTN_AV count is much less
- * than 2x the ICORE_AV count, many speculative I-cache misses are occuring
+ * than 2x the ICORE_AV count, many speculative I-cache misses are occurring
* which are "discovered" to be incorrect fairly quickly.
* The ratio of I-cache miss transactions on Runway to the ICORE_AV count is
* a measure of the effectiveness of instruction prefetching. This ratio
diff -Nru a/arch/parisc/kernel/ptrace.c b/arch/parisc/kernel/ptrace.c
--- a/arch/parisc/kernel/ptrace.c Tue Mar 4 19:30:04 2003
+++ b/arch/parisc/kernel/ptrace.c Tue Mar 4 19:30:04 2003
@@ -242,7 +242,7 @@
*
* Allow writing to Nullify, Divide-step-correction,
* and carry/borrow bits.
- * BEWARE, if you set N, and then single step, it wont
+ * BEWARE, if you set N, and then single step, it won't
* stop on the nullified instruction.
*/
DBG(("sys_ptrace(POKEUSR, %d, %lx, %lx)\n",
diff -Nru a/arch/parisc/kernel/smp.c b/arch/parisc/kernel/smp.c
--- a/arch/parisc/kernel/smp.c Tue Mar 4 19:30:14 2003
+++ b/arch/parisc/kernel/smp.c Tue Mar 4 19:30:14 2003
@@ -401,7 +401,7 @@
__setup("maxcpus=", maxcpus);
/*
- * Flush all other CPU's tlb and then mine. Do this with smp_call_function()
+ * Flush all other CPU's tlb and then mine. Do this with on_each_cpu()
* as we want to ensure all TLB's flushed before proceeding.
*/
@@ -410,8 +410,7 @@
void
smp_flush_tlb_all(void)
{
- smp_call_function((void (*)(void *))flush_tlb_all_local, NULL, 1, 1);
- flush_tlb_all_local();
+ on_each_cpu((void (*)(void *))flush_tlb_all_local, NULL, 1, 1);
}
diff -Nru a/arch/parisc/kernel/time.c b/arch/parisc/kernel/time.c
--- a/arch/parisc/kernel/time.c Tue Mar 4 19:30:07 2003
+++ b/arch/parisc/kernel/time.c Tue Mar 4 19:30:07 2003
@@ -32,7 +32,7 @@
#include
-u64 jiffies_64;
+u64 jiffies_64 = INITIAL_JIFFIES;
/* xtime and wall_jiffies keep wall-clock time */
extern unsigned long wall_jiffies;
diff -Nru a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
--- a/arch/parisc/mm/init.c Tue Mar 4 19:30:08 2003
+++ b/arch/parisc/mm/init.c Tue Mar 4 19:30:08 2003
@@ -974,8 +974,7 @@
do_recycle++;
}
spin_unlock(&sid_lock);
- smp_call_function((void (*)(void *))flush_tlb_all_local, NULL, 1, 1);
- flush_tlb_all_local();
+ on_each_cpu((void (*)(void *))flush_tlb_all_local, NULL, 1, 1);
if (do_recycle) {
spin_lock(&sid_lock);
recycle_sids(recycle_ndirty,recycle_dirty_array);
diff -Nru a/arch/ppc/4xx_io/serial_sicc.c b/arch/ppc/4xx_io/serial_sicc.c
--- a/arch/ppc/4xx_io/serial_sicc.c Tue Mar 4 19:30:03 2003
+++ b/arch/ppc/4xx_io/serial_sicc.c Tue Mar 4 19:30:03 2003
@@ -139,7 +139,7 @@
#define _LSR_RX_ERR (_LSR_LB_BREAK | _LSR_FE_MASK | _LSR_OE_MASK | \
_LSR_PE_MASK )
-/* serial port reciever command register */
+/* serial port receiver command register */
#define _RCR_ER_MASK 0x80 /* enable receiver mask */
#define _RCR_DME_MASK 0x60 /* dma mode */
diff -Nru a/arch/ppc/8xx_io/cs4218_tdm.c b/arch/ppc/8xx_io/cs4218_tdm.c
--- a/arch/ppc/8xx_io/cs4218_tdm.c Tue Mar 4 19:30:04 2003
+++ b/arch/ppc/8xx_io/cs4218_tdm.c Tue Mar 4 19:30:04 2003
@@ -2495,7 +2495,7 @@
cp->cp_simode &= ~0x00000fff;
/* Enable common receive/transmit clock pins, use IDL format.
- * Sync on falling edge, transmit rising clock, recieve falling
+ * Sync on falling edge, transmit rising clock, receive falling
* clock, delay 1 bit on both Tx and Rx. Common Tx/Rx clocks and
* sync.
* Connect SMC2 to TSA.
diff -Nru a/arch/ppc/kernel/l2cr.S b/arch/ppc/kernel/l2cr.S
--- a/arch/ppc/kernel/l2cr.S Tue Mar 4 19:30:09 2003
+++ b/arch/ppc/kernel/l2cr.S Tue Mar 4 19:30:09 2003
@@ -136,7 +136,7 @@
/**** Might be a good idea to set L2DO here - to prevent instructions
from getting into the cache. But since we invalidate
the next time we enable the cache it doesn't really matter.
- Don't do this unless you accomodate all processor variations.
+ Don't do this unless you accommodate all processor variations.
The bit moved on the 7450.....
****/
diff -Nru a/arch/ppc/kernel/temp.c b/arch/ppc/kernel/temp.c
--- a/arch/ppc/kernel/temp.c Tue Mar 4 19:30:09 2003
+++ b/arch/ppc/kernel/temp.c Tue Mar 4 19:30:09 2003
@@ -194,10 +194,7 @@
/* schedule ourselves to be run again */
mod_timer(&tau_timer, jiffies + shrink_timer) ;
-#ifdef CONFIG_SMP
- smp_call_function(tau_timeout, NULL, 1, 0);
-#endif
- tau_timeout(NULL);
+ on_each_cpu(tau_timeout, NULL, 1, 0);
}
/*
@@ -239,10 +236,7 @@
tau_timer.expires = jiffies + shrink_timer;
add_timer(&tau_timer);
-#ifdef CONFIG_SMP
- smp_call_function(TAU_init_smp, NULL, 1, 0);
-#endif
- TAU_init_smp(NULL);
+ on_each_cpu(TAU_init_smp, NULL, 1, 0);
printk("Thermal assist unit ");
#ifdef CONFIG_TAU_INT
diff -Nru a/arch/ppc/kernel/time.c b/arch/ppc/kernel/time.c
--- a/arch/ppc/kernel/time.c Tue Mar 4 19:30:12 2003
+++ b/arch/ppc/kernel/time.c Tue Mar 4 19:30:12 2003
@@ -68,7 +68,7 @@
#include
/* XXX false sharing with below? */
-u64 jiffies_64;
+u64 jiffies_64 = INITIAL_JIFFIES;
unsigned long disarm_decr[NR_CPUS];
diff -Nru a/arch/ppc/mm/mem_pieces.c b/arch/ppc/mm/mem_pieces.c
--- a/arch/ppc/mm/mem_pieces.c Tue Mar 4 19:30:04 2003
+++ b/arch/ppc/mm/mem_pieces.c Tue Mar 4 19:30:04 2003
@@ -1,6 +1,6 @@
/*
* Copyright (c) 1996 Paul Mackerras
- * Changes to accomodate Power Macintoshes.
+ * Changes to accommodate Power Macintoshes.
* Cort Dougan
* Rewrites.
* Grant Erickson
diff -Nru a/arch/ppc/mm/mem_pieces.h b/arch/ppc/mm/mem_pieces.h
--- a/arch/ppc/mm/mem_pieces.h Tue Mar 4 19:30:03 2003
+++ b/arch/ppc/mm/mem_pieces.h Tue Mar 4 19:30:03 2003
@@ -1,6 +1,6 @@
/*
* Copyright (c) 1996 Paul Mackerras
- * Changes to accomodate Power Macintoshes.
+ * Changes to accommodate Power Macintoshes.
* Cort Dougan
* Rewrites.
* Grant Erickson
diff -Nru a/arch/ppc/platforms/4xx/ibmstbx25.h b/arch/ppc/platforms/4xx/ibmstbx25.h
--- a/arch/ppc/platforms/4xx/ibmstbx25.h Tue Mar 4 19:30:07 2003
+++ b/arch/ppc/platforms/4xx/ibmstbx25.h Tue Mar 4 19:30:07 2003
@@ -164,7 +164,7 @@
#define IBM_CPM_CPU 0x10000000 /* PPC405B3 clock control */
#define IBM_CPM_AUD 0x08000000 /* Audio Decoder */
#define IBM_CPM_EBIU 0x04000000 /* External Bus Interface Unit */
-#define IBM_CPM_IRR 0x02000000 /* Infrared reciever */
+#define IBM_CPM_IRR 0x02000000 /* Infrared receiver */
#define IBM_CPM_DMA 0x01000000 /* DMA controller */
#define IBM_CPM_UART2 0x00200000 /* Serial Control Port */
#define IBM_CPM_UART1 0x00100000 /* Serial 1 / Infrared */
diff -Nru a/arch/ppc/platforms/pmac_feature.c b/arch/ppc/platforms/pmac_feature.c
--- a/arch/ppc/platforms/pmac_feature.c Tue Mar 4 19:30:13 2003
+++ b/arch/ppc/platforms/pmac_feature.c Tue Mar 4 19:30:13 2003
@@ -50,7 +50,7 @@
/*
* We use a single global lock to protect accesses. Each driver has
- * to take care of it's own locking
+ * to take care of its own locking
*/
static spinlock_t feature_lock __pmacdata = SPIN_LOCK_UNLOCKED;
diff -Nru a/arch/ppc/syslib/mpc10x_common.c b/arch/ppc/syslib/mpc10x_common.c
--- a/arch/ppc/syslib/mpc10x_common.c Tue Mar 4 19:30:05 2003
+++ b/arch/ppc/syslib/mpc10x_common.c Tue Mar 4 19:30:05 2003
@@ -109,7 +109,7 @@
return -1;
}
- /* Make sure its a supported bridge */
+ /* Make sure it's a supported bridge */
early_read_config_dword(hose,
0,
PCI_DEVFN(0,0),
diff -Nru a/arch/ppc64/boot/addRamDisk.c b/arch/ppc64/boot/addRamDisk.c
--- a/arch/ppc64/boot/addRamDisk.c Tue Mar 4 19:30:09 2003
+++ b/arch/ppc64/boot/addRamDisk.c Tue Mar 4 19:30:09 2003
@@ -154,7 +154,7 @@
/* Process the Sysmap file to determine where _end is */
sysmapPages = sysmapLen / 4096;
- /* read the whole file line by line, expect that it doesnt fail */
+ /* read the whole file line by line, expect that it doesn't fail */
while ( fgets(inbuf, 4096, sysmap) ) ;
/* search for _end in the last page of the system map */
ptr_end = strstr(inbuf, " _end");
diff -Nru a/arch/ppc64/boot/addSystemMap.c b/arch/ppc64/boot/addSystemMap.c
--- a/arch/ppc64/boot/addSystemMap.c Tue Mar 4 19:30:08 2003
+++ b/arch/ppc64/boot/addSystemMap.c Tue Mar 4 19:30:08 2003
@@ -146,7 +146,7 @@
/* Process the Sysmap file to determine the true end of the kernel */
sysmapPages = sysmapLen / 4096;
printf("System map pages to copy = %ld\n", sysmapPages);
- /* read the whole file line by line, expect that it doesnt fail */
+ /* read the whole file line by line, expect that it doesn't fail */
while ( fgets(inbuf, 4096, sysmap) ) ;
/* search for _end in the last page of the system map */
ptr_end = strstr(inbuf, " _end");
diff -Nru a/arch/ppc64/kernel/head.S b/arch/ppc64/kernel/head.S
--- a/arch/ppc64/kernel/head.S Tue Mar 4 19:30:09 2003
+++ b/arch/ppc64/kernel/head.S Tue Mar 4 19:30:09 2003
@@ -109,7 +109,7 @@
.llong 0x0
/* Secondary processors write this value with their cpu # */
- /* after they enter the spin loop immediatly below. */
+ /* after they enter the spin loop immediately below. */
.globl __secondary_hold_acknowledge
__secondary_hold_acknowledge:
.llong 0x0
diff -Nru a/arch/ppc64/kernel/ioctl32.c b/arch/ppc64/kernel/ioctl32.c
--- a/arch/ppc64/kernel/ioctl32.c Tue Mar 4 19:30:14 2003
+++ b/arch/ppc64/kernel/ioctl32.c Tue Mar 4 19:30:14 2003
@@ -3315,7 +3315,7 @@
*
* But how to keep track of these kernel buffers? We'd need to either
* keep track of them in some table _or_ know about usbdevicefs internals
- * (ie. the exact layout of it's file private, which is actually defined
+ * (ie. the exact layout of its file private, which is actually defined
* in linux/usbdevice_fs.h, the layout of the async queues are private to
* devio.c)
*
diff -Nru a/arch/ppc64/kernel/lmb.c b/arch/ppc64/kernel/lmb.c
--- a/arch/ppc64/kernel/lmb.c Tue Mar 4 19:30:09 2003
+++ b/arch/ppc64/kernel/lmb.c Tue Mar 4 19:30:09 2003
@@ -73,7 +73,7 @@
_lmb->reserved.cnt = 1;
}
-/* This is only used here, it doesnt deserve to be in bitops.h */
+/* This is only used here, it doesn't deserve to be in bitops.h */
static __inline__ long cnt_trailing_zeros(unsigned long mask)
{
long cnt;
diff -Nru a/arch/ppc64/kernel/pSeries_lpar.c b/arch/ppc64/kernel/pSeries_lpar.c
--- a/arch/ppc64/kernel/pSeries_lpar.c Tue Mar 4 19:30:14 2003
+++ b/arch/ppc64/kernel/pSeries_lpar.c Tue Mar 4 19:30:14 2003
@@ -461,7 +461,7 @@
return -1;
/*
- * Since we try and ioremap PHBs we dont own, the pte insert
+ * Since we try and ioremap PHBs we don't own, the pte insert
* will fail. However we must catch the failure in hash_page
* or we will loop forever, so return -2 in this case.
*/
@@ -485,7 +485,7 @@
for (i = 0; i < HPTES_PER_GROUP; i++) {
- /* dont remove a bolted entry */
+ /* don't remove a bolted entry */
lpar_rc = plpar_pte_remove(H_ANDCOND, hpte_group + slot_offset,
(0x1UL << 4), &dummy1, &dummy2);
diff -Nru a/arch/ppc64/kernel/pci_dn.c b/arch/ppc64/kernel/pci_dn.c
--- a/arch/ppc64/kernel/pci_dn.c Tue Mar 4 19:30:14 2003
+++ b/arch/ppc64/kernel/pci_dn.c Tue Mar 4 19:30:14 2003
@@ -150,7 +150,7 @@
}
/* This is the "slow" path for looking up a device_node from a
- * pci_dev. It will hunt for the device under it's parent's
+ * pci_dev. It will hunt for the device under its parent's
* phb and then update sysdata for a future fastpath.
*
* It may also do fixups on the actual device since this happens
diff -Nru a/arch/ppc64/kernel/ras.c b/arch/ppc64/kernel/ras.c
--- a/arch/ppc64/kernel/ras.c Tue Mar 4 19:30:05 2003
+++ b/arch/ppc64/kernel/ras.c Tue Mar 4 19:30:05 2003
@@ -94,7 +94,7 @@
/*
* Handle power subsystem events (EPOW).
*
- * Presently we just log the event has occured. This should be fixed
+ * Presently we just log the event has occurred. This should be fixed
* to examine the type of power failure and take appropriate action where
* the time horizon permits something useful to be done.
*/
diff -Nru a/arch/ppc64/kernel/smp.c b/arch/ppc64/kernel/smp.c
--- a/arch/ppc64/kernel/smp.c Tue Mar 4 19:30:12 2003
+++ b/arch/ppc64/kernel/smp.c Tue Mar 4 19:30:12 2003
@@ -51,7 +51,7 @@
int smp_threads_ready = 0;
unsigned long cache_decay_ticks;
-/* initialised so it doesnt end up in bss */
+/* initialised so it doesn't end up in bss */
unsigned long cpu_online_map = 0;
static struct smp_ops_t *smp_ops;
diff -Nru a/arch/ppc64/kernel/time.c b/arch/ppc64/kernel/time.c
--- a/arch/ppc64/kernel/time.c Tue Mar 4 19:30:13 2003
+++ b/arch/ppc64/kernel/time.c Tue Mar 4 19:30:13 2003
@@ -65,7 +65,7 @@
void smp_local_timer_interrupt(struct pt_regs *);
-u64 jiffies_64;
+u64 jiffies_64 = INITIAL_JIFFIES;
/* keep track of when we need to update the rtc */
time_t last_rtc_update;
diff -Nru a/arch/ppc64/xmon/xmon.c b/arch/ppc64/xmon/xmon.c
--- a/arch/ppc64/xmon/xmon.c Tue Mar 4 19:30:13 2003
+++ b/arch/ppc64/xmon/xmon.c Tue Mar 4 19:30:13 2003
@@ -2072,7 +2072,7 @@
int instr;
int num_parms;
- /* dont look for traceback table in userspace */
+ /* don't look for traceback table in userspace */
if (codeaddr < PAGE_OFFSET)
return 0;
diff -Nru a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
--- a/arch/s390/kernel/smp.c Tue Mar 4 19:30:03 2003
+++ b/arch/s390/kernel/smp.c Tue Mar 4 19:30:03 2003
@@ -228,8 +228,7 @@
void machine_restart_smp(char * __unused)
{
cpu_restart_map = cpu_online_map;
- smp_call_function(do_machine_restart, NULL, 0, 0);
- do_machine_restart(NULL);
+ on_each_cpu(do_machine_restart, NULL, 0, 0);
}
static void do_machine_halt(void * __unused)
@@ -247,8 +246,7 @@
void machine_halt_smp(void)
{
- smp_call_function(do_machine_halt, NULL, 0, 0);
- do_machine_halt(NULL);
+ on_each_cpu(do_machine_halt, NULL, 0, 0);
}
static void do_machine_power_off(void * __unused)
@@ -266,8 +264,7 @@
void machine_power_off_smp(void)
{
- smp_call_function(do_machine_power_off, NULL, 0, 0);
- do_machine_power_off(NULL);
+ on_each_cpu(do_machine_power_off, NULL, 0, 0);
}
/*
@@ -339,8 +336,7 @@
void smp_ptlb_all(void)
{
- smp_call_function(smp_ptlb_callback, NULL, 0, 1);
- local_flush_tlb();
+ on_each_cpu(smp_ptlb_callback, NULL, 0, 1);
}
/*
@@ -400,8 +396,10 @@
parms.end_ctl = cr;
parms.orvals[cr] = 1 << bit;
parms.andvals[cr] = 0xFFFFFFFF;
+ preempt_disable();
smp_call_function(smp_ctl_bit_callback, &parms, 0, 1);
__ctl_set_bit(cr, bit);
+ preempt_enable();
}
/*
@@ -414,8 +412,10 @@
parms.end_ctl = cr;
parms.orvals[cr] = 0x00000000;
parms.andvals[cr] = ~(1 << bit);
+ preempt_disable();
smp_call_function(smp_ctl_bit_callback, &parms, 0, 1);
__ctl_clear_bit(cr, bit);
+ preempt_enable();
}
/*
diff -Nru a/arch/s390/kernel/time.c b/arch/s390/kernel/time.c
--- a/arch/s390/kernel/time.c Tue Mar 4 19:30:03 2003
+++ b/arch/s390/kernel/time.c Tue Mar 4 19:30:03 2003
@@ -46,7 +46,7 @@
#define TICK_SIZE tick
-u64 jiffies_64;
+u64 jiffies_64 = INITIAL_JIFFIES;
static ext_int_info_t ext_int_info_timer;
static uint64_t xtime_cc;
diff -Nru a/arch/s390x/kernel/exec32.c b/arch/s390x/kernel/exec32.c
--- a/arch/s390x/kernel/exec32.c Tue Mar 4 19:30:03 2003
+++ b/arch/s390x/kernel/exec32.c Tue Mar 4 19:30:03 2003
@@ -4,7 +4,7 @@
* Copyright (C) 2000 IBM Deutschland Entwicklung GmbH, IBM Corporation
* Author(s): Gerhard Tonn (ton@de.ibm.com)
*
- * Seperated from binfmt_elf32.c to reduce exports for module enablement.
+ * Separated from binfmt_elf32.c to reduce exports for module enablement.
*
*/
diff -Nru a/arch/s390x/kernel/smp.c b/arch/s390x/kernel/smp.c
--- a/arch/s390x/kernel/smp.c Tue Mar 4 19:30:09 2003
+++ b/arch/s390x/kernel/smp.c Tue Mar 4 19:30:09 2003
@@ -227,8 +227,7 @@
void machine_restart_smp(char * __unused)
{
cpu_restart_map = cpu_online_map;
- smp_call_function(do_machine_restart, NULL, 0, 0);
- do_machine_restart(NULL);
+ on_each_cpu(do_machine_restart, NULL, 0, 0);
}
static void do_machine_halt(void * __unused)
@@ -246,8 +245,7 @@
void machine_halt_smp(void)
{
- smp_call_function(do_machine_halt, NULL, 0, 0);
- do_machine_halt(NULL);
+ on_each_cpu(do_machine_halt, NULL, 0, 0);
}
static void do_machine_power_off(void * __unused)
@@ -265,8 +263,7 @@
void machine_power_off_smp(void)
{
- smp_call_function(do_machine_power_off, NULL, 0, 0);
- do_machine_power_off(NULL);
+ on_each_cpu(do_machine_power_off, NULL, 0, 0);
}
/*
@@ -383,8 +380,10 @@
parms.end_ctl = cr;
parms.orvals[cr] = 1 << bit;
parms.andvals[cr] = -1L;
+ preempt_disable();
smp_call_function(smp_ctl_bit_callback, &parms, 0, 1);
__ctl_set_bit(cr, bit);
+ preempt_enable();
}
/*
@@ -397,8 +396,10 @@
parms.end_ctl = cr;
parms.orvals[cr] = 0;
parms.andvals[cr] = ~(1L << bit);
+ preempt_disable();
smp_call_function(smp_ctl_bit_callback, &parms, 0, 1);
__ctl_clear_bit(cr, bit);
+ preempt_enable();
}
diff -Nru a/arch/s390x/kernel/time.c b/arch/s390x/kernel/time.c
--- a/arch/s390x/kernel/time.c Tue Mar 4 19:30:13 2003
+++ b/arch/s390x/kernel/time.c Tue Mar 4 19:30:13 2003
@@ -45,7 +45,7 @@
#define TICK_SIZE tick
-u64 jiffies_64;
+u64 jiffies_64 = INITIAL_JIFFIES;
static ext_int_info_t ext_int_info_timer;
static uint64_t xtime_cc;
diff -Nru a/arch/sh/kernel/fpu.c b/arch/sh/kernel/fpu.c
--- a/arch/sh/kernel/fpu.c Tue Mar 4 19:30:09 2003
+++ b/arch/sh/kernel/fpu.c Tue Mar 4 19:30:09 2003
@@ -118,7 +118,7 @@
/*
* Load the FPU with signalling NANS. This bit pattern we're using
- * has the property that no matter wether considered as single or as
+ * has the property that no matter whether considered as single or as
* double precission represents signaling NANS.
*/
diff -Nru a/arch/sh/kernel/irq.c b/arch/sh/kernel/irq.c
--- a/arch/sh/kernel/irq.c Tue Mar 4 19:30:07 2003
+++ b/arch/sh/kernel/irq.c Tue Mar 4 19:30:07 2003
@@ -61,7 +61,7 @@
{
/*
* 'what should we do if we get a hw irq event on an illegal vector'.
- * each architecture has to answer this themselves, it doesnt deserve
+ * each architecture has to answer this themselves, it doesn't deserve
* a generic callback i think.
*/
printk("unexpected IRQ trap at vector %02x\n", irq);
diff -Nru a/arch/sh/kernel/pci-dma.c b/arch/sh/kernel/pci-dma.c
--- a/arch/sh/kernel/pci-dma.c Tue Mar 4 19:30:12 2003
+++ b/arch/sh/kernel/pci-dma.c Tue Mar 4 19:30:12 2003
@@ -24,7 +24,7 @@
ret = (void *) __get_free_pages(gfp, get_order(size));
if (ret != NULL) {
- /* Is it neccessary to do the memset? */
+ /* Is it necessary to do the memset? */
memset(ret, 0, size);
*dma_handle = virt_to_bus(ret);
}
diff -Nru a/arch/sh/kernel/pci-sh7751.c b/arch/sh/kernel/pci-sh7751.c
--- a/arch/sh/kernel/pci-sh7751.c Tue Mar 4 19:30:11 2003
+++ b/arch/sh/kernel/pci-sh7751.c Tue Mar 4 19:30:11 2003
@@ -285,7 +285,7 @@
struct pci_ops *bios = NULL;
struct pci_ops *dir = NULL;
- PCIDBG(1,"PCI: Starting intialization.\n");
+ PCIDBG(1,"PCI: Starting initialization.\n");
#ifdef CONFIG_PCI_BIOS
if ((pci_probe & PCI_PROBE_BIOS) && ((bios = pci_find_bios()))) {
pci_probe |= PCI_BIOS_SORT;
diff -Nru a/arch/sh/kernel/time.c b/arch/sh/kernel/time.c
--- a/arch/sh/kernel/time.c Tue Mar 4 19:30:03 2003
+++ b/arch/sh/kernel/time.c Tue Mar 4 19:30:03 2003
@@ -70,7 +70,7 @@
#endif /* CONFIG_CPU_SUBTYPE_ST40STB1 */
#endif /* __sh3__ or __SH4__ */
-u64 jiffies_64;
+u64 jiffies_64 = INITIAL_JIFFIES;
extern unsigned long wall_jiffies;
#define TICK_SIZE tick
diff -Nru a/arch/sh/stboards/pcidma.c b/arch/sh/stboards/pcidma.c
--- a/arch/sh/stboards/pcidma.c Tue Mar 4 19:30:08 2003
+++ b/arch/sh/stboards/pcidma.c Tue Mar 4 19:30:08 2003
@@ -24,7 +24,7 @@
ret = (void *) __get_free_pages(gfp, get_order(size));
if (ret != NULL) {
- /* Is it neccessary to do the memset? */
+ /* Is it necessary to do the memset? */
memset(ret, 0, size);
*dma_handle = virt_to_bus(ret);
}
diff -Nru a/arch/sparc/Kconfig b/arch/sparc/Kconfig
--- a/arch/sparc/Kconfig Tue Mar 4 19:30:03 2003
+++ b/arch/sparc/Kconfig Tue Mar 4 19:30:03 2003
@@ -148,7 +148,7 @@
config MCA
bool
help
- EISA is not supported.
+ MCA is not supported.
Say N
config PCMCIA
diff -Nru a/arch/sparc/kernel/init_task.c b/arch/sparc/kernel/init_task.c
--- a/arch/sparc/kernel/init_task.c Tue Mar 4 19:30:10 2003
+++ b/arch/sparc/kernel/init_task.c Tue Mar 4 19:30:10 2003
@@ -12,7 +12,7 @@
struct mm_struct init_mm = INIT_MM(init_mm);
struct task_struct init_task = INIT_TASK(init_task);
-/* .text section in head.S is aligned at 8k boundry and this gets linked
+/* .text section in head.S is aligned at 8k boundary and this gets linked
* right after that so that the init_thread_union is aligned properly as well.
* If this is not aligned on a 8k boundry, then you should change code
* in etrap.S which assumes it.
diff -Nru a/arch/sparc/kernel/ioport.c b/arch/sparc/kernel/ioport.c
--- a/arch/sparc/kernel/ioport.c Tue Mar 4 19:30:14 2003
+++ b/arch/sparc/kernel/ioport.c Tue Mar 4 19:30:14 2003
@@ -599,7 +599,7 @@
* must match what was provided for in a previous pci_map_single call. All
* other usages are undefined.
*
- * After this call, reads by the cpu to the buffer are guarenteed to see
+ * After this call, reads by the cpu to the buffer are guaranteed to see
* whatever the device wrote there.
*/
void pci_unmap_single(struct pci_dev *hwdev, dma_addr_t ba, size_t size,
diff -Nru a/arch/sparc/kernel/time.c b/arch/sparc/kernel/time.c
--- a/arch/sparc/kernel/time.c Tue Mar 4 19:30:09 2003
+++ b/arch/sparc/kernel/time.c Tue Mar 4 19:30:09 2003
@@ -45,7 +45,7 @@
extern unsigned long wall_jiffies;
-u64 jiffies_64;
+u64 jiffies_64 = INITIAL_JIFFIES;
spinlock_t rtc_lock = SPIN_LOCK_UNLOCKED;
enum sparc_clock_type sp_clock_typ;
diff -Nru a/arch/sparc/lib/blockops.S b/arch/sparc/lib/blockops.S
--- a/arch/sparc/lib/blockops.S Tue Mar 4 19:30:14 2003
+++ b/arch/sparc/lib/blockops.S Tue Mar 4 19:30:14 2003
@@ -38,7 +38,7 @@
* and (2 * PAGE_SIZE) (for kernel stacks)
* and with a second arg of zero. We assume in
* all of these cases that the buffer is aligned
- * on at least an 8 byte boundry.
+ * on at least an 8 byte boundary.
*
* Therefore we special case them to make them
* as fast as possible.
diff -Nru a/arch/sparc/lib/checksum.S b/arch/sparc/lib/checksum.S
--- a/arch/sparc/lib/checksum.S Tue Mar 4 19:30:10 2003
+++ b/arch/sparc/lib/checksum.S Tue Mar 4 19:30:10 2003
@@ -336,7 +336,7 @@
bne cc_dword_align ! yes, we check for short lengths there
andcc %g1, 0xffffff80, %g0 ! can we use unrolled loop?
3: be 3f ! nope, less than one loop remains
- andcc %o1, 4, %g0 ! dest aligned on 4 or 8 byte boundry?
+ andcc %o1, 4, %g0 ! dest aligned on 4 or 8 byte boundary?
be ccdbl + 4 ! 8 byte aligned, kick ass
5: CSUMCOPY_BIGCHUNK(%o0,%o1,%g7,0x00,%o4,%o5,%g2,%g3,%g4,%g5,%o2,%o3)
CSUMCOPY_BIGCHUNK(%o0,%o1,%g7,0x20,%o4,%o5,%g2,%g3,%g4,%g5,%o2,%o3)
diff -Nru a/arch/sparc/mm/iommu.c b/arch/sparc/mm/iommu.c
--- a/arch/sparc/mm/iommu.c Tue Mar 4 19:30:14 2003
+++ b/arch/sparc/mm/iommu.c Tue Mar 4 19:30:14 2003
@@ -112,7 +112,7 @@
for (i = 6; i < 9; i++)
if ((1 << (i + PAGE_SHIFT)) == ptsize)
break;
- tmp = __get_free_pages(GFP_DMA, i);
+ tmp = __get_free_pages(GFP_KERNEL, i);
if (!tmp) {
prom_printf("Could not allocate iopte of size 0x%08x\n", ptsize);
prom_halt();
diff -Nru a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
--- a/arch/sparc/mm/srmmu.c Tue Mar 4 19:30:10 2003
+++ b/arch/sparc/mm/srmmu.c Tue Mar 4 19:30:10 2003
@@ -2120,7 +2120,7 @@
srmmu_is_bad();
}
-/* dont laugh, static pagetables */
+/* don't laugh, static pagetables */
static void srmmu_check_pgt_cache(int low, int high)
{
}
diff -Nru a/arch/sparc/mm/sun4c.c b/arch/sparc/mm/sun4c.c
--- a/arch/sparc/mm/sun4c.c Tue Mar 4 19:30:12 2003
+++ b/arch/sparc/mm/sun4c.c Tue Mar 4 19:30:12 2003
@@ -533,7 +533,7 @@
}
}
-/* Addr is always aligned on a page boundry for us already. */
+/* Addr is always aligned on a page boundary for us already. */
static void sun4c_map_dma_area(unsigned long va, u32 addr, int len)
{
unsigned long page, end;
@@ -1042,7 +1042,7 @@
get_locked_segment(addr);
/* We are changing the virtual color of the page(s)
- * so we must flush the cache to guarentee consistency.
+ * so we must flush the cache to guarantee consistency.
*/
sun4c_flush_page(pages);
#ifndef CONFIG_SUN4
diff -Nru a/arch/sparc64/Kconfig b/arch/sparc64/Kconfig
--- a/arch/sparc64/Kconfig Tue Mar 4 19:30:13 2003
+++ b/arch/sparc64/Kconfig Tue Mar 4 19:30:13 2003
@@ -150,18 +150,6 @@
If in doubt, say N.
-config CPU_FREQ_PROC_INTF
- tristate "/proc/cpufreq interface (DEPRECATED)"
- depends on CPU_FREQ && PROC_FS
- help
- This enables the /proc/cpufreq interface for controlling
- CPUFreq. Please note that it is recommended to use the sysfs
- interface instead (which is built automatically).
-
- For details, take a look at linux/Documentation/cpufreq.
-
- If in doubt, say N.
-
config CPU_FREQ_TABLE
tristate
default y
@@ -176,6 +164,8 @@
For details, take a look at linux/Documentation/cpufreq.
If in doubt, say N.
+
+source "drivers/cpufreq/Kconfig"
# Identify this as a Sparc64 build
config SPARC64
diff -Nru a/arch/sparc64/defconfig b/arch/sparc64/defconfig
--- a/arch/sparc64/defconfig Tue Mar 4 19:30:05 2003
+++ b/arch/sparc64/defconfig Tue Mar 4 19:30:05 2003
@@ -15,12 +15,6 @@
CONFIG_SYSVIPC=y
# CONFIG_BSD_PROCESS_ACCT is not set
CONFIG_SYSCTL=y
-# CONFIG_LOG_BUF_SHIFT_17 is not set
-# CONFIG_LOG_BUF_SHIFT_16 is not set
-CONFIG_LOG_BUF_SHIFT_15=y
-# CONFIG_LOG_BUF_SHIFT_14 is not set
-# CONFIG_LOG_BUF_SHIFT_13 is not set
-# CONFIG_LOG_BUF_SHIFT_12 is not set
CONFIG_LOG_BUF_SHIFT=15
#
@@ -30,6 +24,7 @@
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
CONFIG_OBSOLETE_MODPARM=y
+# CONFIG_MODVERSIONS is not set
CONFIG_KMOD=y
#
@@ -45,6 +40,7 @@
CONFIG_NR_CPUS=4
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_PROC_INTF=y
+CONFIG_CPU_FREQ_TABLE=y
CONFIG_US3_FREQ=m
CONFIG_SPARC64=y
CONFIG_HOTPLUG=y
@@ -89,15 +85,52 @@
#
# Graphics support
#
-# CONFIG_FB is not set
+CONFIG_FB=y
+# CONFIG_FB_CLGEN is not set
+# CONFIG_FB_PM2 is not set
+# CONFIG_FB_CYBER2000 is not set
+# CONFIG_FB_IMSTT is not set
+# CONFIG_FB_BW2 is not set
+# CONFIG_FB_CG3 is not set
+CONFIG_FB_CG6=y
+# CONFIG_FB_RIVA is not set
+# CONFIG_FB_MATROX is not set
+# CONFIG_FB_RADEON is not set
+# CONFIG_FB_ATY128 is not set
+# CONFIG_FB_ATY is not set
+# CONFIG_FB_SIS is not set
+# CONFIG_FB_NEOMAGIC is not set
+# CONFIG_FB_3DFX is not set
+# CONFIG_FB_VOODOO1 is not set
+# CONFIG_FB_TRIDENT is not set
+# CONFIG_FB_PM3 is not set
+CONFIG_FB_SBUS=y
+CONFIG_FB_FFB=y
+# CONFIG_FB_TCX is not set
+# CONFIG_FB_CG14 is not set
+# CONFIG_FB_P9100 is not set
+# CONFIG_FB_LEO is not set
+# CONFIG_FB_PCI is not set
+# CONFIG_FB_VIRTUAL is not set
#
# Console display driver support
#
# CONFIG_VGA_CONSOLE is not set
# CONFIG_MDA_CONSOLE is not set
-CONFIG_PROM_CONSOLE=y
+# CONFIG_PROM_CONSOLE is not set
CONFIG_DUMMY_CONSOLE=y
+CONFIG_FRAMEBUFFER_CONSOLE=y
+CONFIG_PCI_CONSOLE=y
+# CONFIG_FBCON_ADVANCED is not set
+CONFIG_FONT_SUN8x16=y
+# CONFIG_FONT_SUN12x22 is not set
+CONFIG_FONTS=y
+# CONFIG_FONT_8x8 is not set
+# CONFIG_FONT_8x16 is not set
+# CONFIG_FONT_6x11 is not set
+# CONFIG_FONT_PEARL_8x8 is not set
+# CONFIG_FONT_ACORN_8x8 is not set
#
# Serial drivers
@@ -110,7 +143,7 @@
CONFIG_SERIAL_SUNCORE=y
CONFIG_SERIAL_SUNZILOG=y
CONFIG_SERIAL_SUNSU=y
-# CONFIG_SERIAL_SUNSAB is not set
+CONFIG_SERIAL_SUNSAB=m
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
@@ -190,7 +223,7 @@
CONFIG_BLK_DEV_ALI15X3=y
# CONFIG_WDC_ALI15X3 is not set
# CONFIG_BLK_DEV_AMD74XX is not set
-CONFIG_BLK_DEV_CMD64X=y
+# CONFIG_BLK_DEV_CMD64X is not set
# CONFIG_BLK_DEV_TRIFLEX is not set
# CONFIG_BLK_DEV_CY82C693 is not set
# CONFIG_BLK_DEV_CS5520 is not set
@@ -198,7 +231,7 @@
# CONFIG_BLK_DEV_HPT366 is not set
# CONFIG_BLK_DEV_SC1200 is not set
# CONFIG_BLK_DEV_PIIX is not set
-CONFIG_BLK_DEV_NS87415=y
+# CONFIG_BLK_DEV_NS87415 is not set
# CONFIG_BLK_DEV_OPTI621 is not set
# CONFIG_BLK_DEV_PDC202XX_OLD is not set
# CONFIG_BLK_DEV_PDC202XX_NEW is not set
@@ -221,9 +254,9 @@
#
CONFIG_BLK_DEV_SD=y
CONFIG_SD_EXTRA_DEVS=40
-CONFIG_CHR_DEV_ST=y
+CONFIG_CHR_DEV_ST=m
CONFIG_CHR_DEV_OSST=m
-CONFIG_BLK_DEV_SR=y
+CONFIG_BLK_DEV_SR=m
CONFIG_BLK_DEV_SR_VENDOR=y
CONFIG_SR_EXTRA_DEVS=2
CONFIG_CHR_DEV_SG=m
@@ -320,11 +353,12 @@
CONFIG_INET_ESP=y
CONFIG_XFRM_USER=m
CONFIG_IPV6=m
+CONFIG_IPV6_PRIVACY=y
#
# SCTP Configuration (EXPERIMENTAL)
#
-CONFIG_IPV6_SCTP__=y
+CONFIG_IPV6_SCTP__=m
CONFIG_IP_SCTP=m
# CONFIG_SCTP_ADLER32 is not set
# CONFIG_SCTP_DBG_MSG is not set
@@ -395,6 +429,7 @@
# Ethernet (10 or 100Mbit)
#
CONFIG_NET_ETHERNET=y
+# CONFIG_MII is not set
CONFIG_SUNLANCE=y
CONFIG_HAPPYMEAL=y
CONFIG_SUNBMAC=m
@@ -402,6 +437,7 @@
CONFIG_SUNGEM=y
CONFIG_NET_VENDOR_3COM=y
CONFIG_VORTEX=m
+CONFIG_TYPHOON=m
#
# Tulip family network device support
@@ -419,9 +455,11 @@
CONFIG_PCNET32=m
# CONFIG_AMD8111_ETH is not set
CONFIG_ADAPTEC_STARFIRE=m
+CONFIG_ADAPTEC_STARFIRE_NAPI=y
CONFIG_B44=m
CONFIG_DGRS=m
CONFIG_EEPRO100=m
+# CONFIG_EEPRO100_PIO is not set
CONFIG_E100=m
CONFIG_FEALNX=m
CONFIG_NATSEMI=m
@@ -940,6 +978,7 @@
CONFIG_BT_HCIUART=m
CONFIG_BT_HCIUART_H4=y
CONFIG_BT_HCIUART_BCSP=y
+CONFIG_BT_HCIUART_BCSP_TXCRC=y
CONFIG_BT_HCIVHCI=m
#
diff -Nru a/arch/sparc64/kernel/Makefile b/arch/sparc64/kernel/Makefile
--- a/arch/sparc64/kernel/Makefile Tue Mar 4 19:30:05 2003
+++ b/arch/sparc64/kernel/Makefile Tue Mar 4 19:30:05 2003
@@ -3,7 +3,7 @@
#
EXTRA_AFLAGS := -ansi
-CFLAGS += -Werror
+EXTRA_CFLAGS := -Werror
EXTRA_TARGETS := head.o init_task.o
diff -Nru a/arch/sparc64/kernel/init_task.c b/arch/sparc64/kernel/init_task.c
--- a/arch/sparc64/kernel/init_task.c Tue Mar 4 19:30:14 2003
+++ b/arch/sparc64/kernel/init_task.c Tue Mar 4 19:30:14 2003
@@ -12,7 +12,7 @@
static struct sighand_struct init_sighand = INIT_SIGHAND(init_sighand);
struct mm_struct init_mm = INIT_MM(init_mm);
-/* .text section in head.S is aligned at 2 page boundry and this gets linked
+/* .text section in head.S is aligned at 2 page boundary and this gets linked
* right after that so that the init_thread_union is aligned properly as well.
* We really don't need this special alignment like the Intel does, but
* I do it anyways for completeness.
diff -Nru a/arch/sparc64/kernel/ioctl32.c b/arch/sparc64/kernel/ioctl32.c
--- a/arch/sparc64/kernel/ioctl32.c Tue Mar 4 19:30:13 2003
+++ b/arch/sparc64/kernel/ioctl32.c Tue Mar 4 19:30:13 2003
@@ -3947,7 +3947,7 @@
*
* But how to keep track of these kernel buffers? We'd need to either
* keep track of them in some table _or_ know about usbdevicefs internals
- * (ie. the exact layout of it's file private, which is actually defined
+ * (ie. the exact layout of its file private, which is actually defined
* in linux/usbdevice_fs.h, the layout of the async queues are private to
* devio.c)
*
diff -Nru a/arch/sparc64/kernel/iommu_common.h b/arch/sparc64/kernel/iommu_common.h
--- a/arch/sparc64/kernel/iommu_common.h Tue Mar 4 19:30:14 2003
+++ b/arch/sparc64/kernel/iommu_common.h Tue Mar 4 19:30:14 2003
@@ -40,7 +40,7 @@
/* Two addresses are "virtually contiguous" if and only if:
* 1) They are equal, or...
- * 2) They are both on a page boundry
+ * 2) They are both on a page boundary
*/
#define VCONTIG(__X, __Y) (((__X) == (__Y)) || \
(((__X) | (__Y)) << (64UL - PAGE_SHIFT)) == 0UL)
diff -Nru a/arch/sparc64/kernel/pci_common.c b/arch/sparc64/kernel/pci_common.c
--- a/arch/sparc64/kernel/pci_common.c Tue Mar 4 19:30:05 2003
+++ b/arch/sparc64/kernel/pci_common.c Tue Mar 4 19:30:05 2003
@@ -583,7 +583,7 @@
* the PBM.
*
* However if that parent bridge has interrupt map/mask
- * properties of it's own we use the PROM register property
+ * properties of its own we use the PROM register property
* of the next child device on the path to PDEV.
*
* In detail the two cases are (note that the 'X' below is the
diff -Nru a/arch/sparc64/kernel/sbus.c b/arch/sparc64/kernel/sbus.c
--- a/arch/sparc64/kernel/sbus.c Tue Mar 4 19:30:14 2003
+++ b/arch/sparc64/kernel/sbus.c Tue Mar 4 19:30:14 2003
@@ -24,7 +24,7 @@
#include "iommu_common.h"
/* These should be allocated on an SMP_CACHE_BYTES
- * aligned boundry for optimal performance.
+ * aligned boundary for optimal performance.
*
* On SYSIO, using an 8K page size we have 1GB of SBUS
* DMA space mapped. We divide this space into equally
diff -Nru a/arch/sparc64/kernel/setup.c b/arch/sparc64/kernel/setup.c
--- a/arch/sparc64/kernel/setup.c Tue Mar 4 19:30:07 2003
+++ b/arch/sparc64/kernel/setup.c Tue Mar 4 19:30:07 2003
@@ -688,6 +688,7 @@
sparc64_cpus = kmalloc(NR_CPUS * sizeof(struct cpu), GFP_KERNEL);
if (!sparc64_cpus)
return -ENOMEM;
+ memset(sparc64_cpus, 0, NR_CPUS * sizeof(struct cpu));
for (i = 0; i < NR_CPUS; i++) {
if (cpu_possible(i))
register_cpu(&sparc64_cpus[i], i, NULL);
diff -Nru a/arch/sparc64/kernel/sparc64_ksyms.c b/arch/sparc64/kernel/sparc64_ksyms.c
--- a/arch/sparc64/kernel/sparc64_ksyms.c Tue Mar 4 19:30:13 2003
+++ b/arch/sparc64/kernel/sparc64_ksyms.c Tue Mar 4 19:30:13 2003
@@ -114,6 +114,8 @@
extern unsigned long phys_base;
extern unsigned long pfn_base;
+extern unsigned int sys_call_table[];
+
/* used by various drivers */
#ifdef CONFIG_SMP
#ifndef CONFIG_DEBUG_SPINLOCK
@@ -374,3 +376,6 @@
/* for ns8703 */
EXPORT_SYMBOL(ns87303_lock);
+
+/* for solaris compat module */
+EXPORT_SYMBOL_GPL(sys_call_table);
diff -Nru a/arch/sparc64/kernel/time.c b/arch/sparc64/kernel/time.c
--- a/arch/sparc64/kernel/time.c Tue Mar 4 19:30:08 2003
+++ b/arch/sparc64/kernel/time.c Tue Mar 4 19:30:08 2003
@@ -47,7 +47,7 @@
extern unsigned long wall_jiffies;
-u64 jiffies_64;
+u64 jiffies_64 = INITIAL_JIFFIES;
static unsigned long mstk48t08_regs = 0UL;
static unsigned long mstk48t59_regs = 0UL;
diff -Nru a/arch/sparc64/kernel/traps.c b/arch/sparc64/kernel/traps.c
--- a/arch/sparc64/kernel/traps.c Tue Mar 4 19:30:14 2003
+++ b/arch/sparc64/kernel/traps.c Tue Mar 4 19:30:14 2003
@@ -571,7 +571,7 @@
unsigned long flush_linesize = ecache_flush_linesize;
unsigned long flush_size = ecache_flush_size;
- /* Run through the whole cache to guarentee the timed loop
+ /* Run through the whole cache to guarantee the timed loop
* is really displacing cache lines.
*/
__asm__ __volatile__("1: subcc %0, %4, %0\n\t"
diff -Nru a/arch/sparc64/kernel/us3_cpufreq.c b/arch/sparc64/kernel/us3_cpufreq.c
--- a/arch/sparc64/kernel/us3_cpufreq.c Tue Mar 4 19:30:04 2003
+++ b/arch/sparc64/kernel/us3_cpufreq.c Tue Mar 4 19:30:04 2003
@@ -186,12 +186,16 @@
set_cpus_allowed(current, cpus_allowed);
}
-static int us3freq_setpolicy(struct cpufreq_policy *policy)
+static int us3freq_target(struct cpufreq_policy *policy,
+ unsigned int target_freq,
+ unsigned int relation)
{
unsigned int new_index = 0;
- if (cpufreq_frequency_table_setpolicy(policy,
+ if (cpufreq_frequency_table_target(policy,
&us3_freq_table[policy->cpu].table[0],
+ target_freq,
+ relation,
&new_index))
return -EINVAL;
@@ -224,6 +228,7 @@
policy->policy = CPUFREQ_POLICY_PERFORMANCE;
policy->cpuinfo.transition_latency = 0;
+ policy->cur = clock_tick;
return cpufreq_frequency_table_cpuinfo(policy, table);
}
@@ -268,7 +273,7 @@
(NR_CPUS * sizeof(struct us3_freq_percpu_info)));
driver->verify = us3freq_verify;
- driver->setpolicy = us3freq_setpolicy;
+ driver->target = us3freq_target;
driver->init = us3freq_cpu_init;
driver->exit = us3freq_cpu_exit;
strcpy(driver->name, "UltraSPARC-III");
diff -Nru a/arch/sparc64/lib/Makefile b/arch/sparc64/lib/Makefile
--- a/arch/sparc64/lib/Makefile Tue Mar 4 19:30:04 2003
+++ b/arch/sparc64/lib/Makefile Tue Mar 4 19:30:04 2003
@@ -3,7 +3,7 @@
#
EXTRA_AFLAGS := -ansi
-CFLAGS += -Werror
+EXTRA_CFLAGS := -Werror
L_TARGET = lib.a
obj-y := PeeCeeI.o blockops.o debuglocks.o strlen.o strncmp.o \
diff -Nru a/arch/sparc64/lib/U3copy_from_user.S b/arch/sparc64/lib/U3copy_from_user.S
--- a/arch/sparc64/lib/U3copy_from_user.S Tue Mar 4 19:30:05 2003
+++ b/arch/sparc64/lib/U3copy_from_user.S Tue Mar 4 19:30:05 2003
@@ -416,7 +416,7 @@
2: VISEntryHalf ! MS+MS
- /* Compute (len - (len % 8)) into %g2. This is guarenteed
+ /* Compute (len - (len % 8)) into %g2. This is guaranteed
* to be nonzero.
*/
andn %o2, 0x7, %g2 ! A0 Group
@@ -425,7 +425,7 @@
* one 8-byte longword past the end of src. It actually
* does not, as %g2 is subtracted as loads are done from
* src, so we always stop before running off the end.
- * Also, we are guarenteed to have at least 0x10 bytes
+ * Also, we are guaranteed to have at least 0x10 bytes
* to move here.
*/
sub %g2, 0x8, %g2 ! A0 Group (reg-dep)
diff -Nru a/arch/sparc64/lib/U3copy_in_user.S b/arch/sparc64/lib/U3copy_in_user.S
--- a/arch/sparc64/lib/U3copy_in_user.S Tue Mar 4 19:30:07 2003
+++ b/arch/sparc64/lib/U3copy_in_user.S Tue Mar 4 19:30:07 2003
@@ -447,7 +447,7 @@
2: VISEntryHalf ! MS+MS
- /* Compute (len - (len % 8)) into %g2. This is guarenteed
+ /* Compute (len - (len % 8)) into %g2. This is guaranteed
* to be nonzero.
*/
andn %o2, 0x7, %g2 ! A0 Group
@@ -456,7 +456,7 @@
* one 8-byte longword past the end of src. It actually
* does not, as %g2 is subtracted as loads are done from
* src, so we always stop before running off the end.
- * Also, we are guarenteed to have at least 0x10 bytes
+ * Also, we are guaranteed to have at least 0x10 bytes
* to move here.
*/
sub %g2, 0x8, %g2 ! A0 Group (reg-dep)
diff -Nru a/arch/sparc64/lib/U3copy_to_user.S b/arch/sparc64/lib/U3copy_to_user.S
--- a/arch/sparc64/lib/U3copy_to_user.S Tue Mar 4 19:30:03 2003
+++ b/arch/sparc64/lib/U3copy_to_user.S Tue Mar 4 19:30:03 2003
@@ -463,7 +463,7 @@
2: VISEntryHalf ! MS+MS
- /* Compute (len - (len % 8)) into %g2. This is guarenteed
+ /* Compute (len - (len % 8)) into %g2. This is guaranteed
* to be nonzero.
*/
andn %o2, 0x7, %g2 ! A0 Group
@@ -472,7 +472,7 @@
* one 8-byte longword past the end of src. It actually
* does not, as %g2 is subtracted as loads are done from
* src, so we always stop before running off the end.
- * Also, we are guarenteed to have at least 0x10 bytes
+ * Also, we are guaranteed to have at least 0x10 bytes
* to move here.
*/
sub %g2, 0x8, %g2 ! A0 Group (reg-dep)
diff -Nru a/arch/sparc64/lib/U3memcpy.S b/arch/sparc64/lib/U3memcpy.S
--- a/arch/sparc64/lib/U3memcpy.S Tue Mar 4 19:30:14 2003
+++ b/arch/sparc64/lib/U3memcpy.S Tue Mar 4 19:30:14 2003
@@ -344,7 +344,7 @@
2: VISEntryHalf ! MS+MS
- /* Compute (len - (len % 8)) into %g2. This is guarenteed
+ /* Compute (len - (len % 8)) into %g2. This is guaranteed
* to be nonzero.
*/
andn %o2, 0x7, %g2 ! A0 Group
@@ -353,7 +353,7 @@
* one 8-byte longword past the end of src. It actually
* does not, as %g2 is subtracted as loads are done from
* src, so we always stop before running off the end.
- * Also, we are guarenteed to have at least 0x10 bytes
+ * Also, we are guaranteed to have at least 0x10 bytes
* to move here.
*/
sub %g2, 0x8, %g2 ! A0 Group (reg-dep)
diff -Nru a/arch/sparc64/mm/Makefile b/arch/sparc64/mm/Makefile
--- a/arch/sparc64/mm/Makefile Tue Mar 4 19:30:12 2003
+++ b/arch/sparc64/mm/Makefile Tue Mar 4 19:30:12 2003
@@ -3,7 +3,7 @@
#
EXTRA_AFLAGS := -ansi
-CFLAGS += -Werror
+EXTRA_CFLAGS := -Werror
obj-y := ultra.o fault.o init.o generic.o extable.o
diff -Nru a/arch/sparc64/mm/hugetlbpage.c b/arch/sparc64/mm/hugetlbpage.c
--- a/arch/sparc64/mm/hugetlbpage.c Tue Mar 4 19:30:11 2003
+++ b/arch/sparc64/mm/hugetlbpage.c Tue Mar 4 19:30:11 2003
@@ -25,6 +25,7 @@
extern long htlbpagemem;
static void zap_hugetlb_resources(struct vm_area_struct *);
+void free_huge_page(struct page *page);
#define MAX_ID 32
struct htlbpagekey {
@@ -64,6 +65,7 @@
spin_unlock(&htlbpage_lock);
set_page_count(page, 1);
+ page->lru.prev = (void *)free_huge_page;
memset(page_address(page), 0, HPAGE_SIZE);
return page;
diff -Nru a/arch/sparc64/prom/Makefile b/arch/sparc64/prom/Makefile
--- a/arch/sparc64/prom/Makefile Tue Mar 4 19:30:13 2003
+++ b/arch/sparc64/prom/Makefile Tue Mar 4 19:30:13 2003
@@ -4,7 +4,7 @@
#
EXTRA_AFLAGS := -ansi
-CFLAGS += -Werror
+EXTRA_CFLAGS := -Werror
L_TARGET = lib.a
obj-y := bootstr.o devops.o init.o memory.o misc.o \
diff -Nru a/arch/sparc64/prom/misc.c b/arch/sparc64/prom/misc.c
--- a/arch/sparc64/prom/misc.c Tue Mar 4 19:30:12 2003
+++ b/arch/sparc64/prom/misc.c Tue Mar 4 19:30:12 2003
@@ -142,7 +142,7 @@
return prom_prev;
}
-/* Install Linux trap table so PROM uses that instead of it's own. */
+/* Install Linux trap table so PROM uses that instead of its own. */
void prom_set_trap_table(unsigned long tba)
{
p1275_cmd("SUNW,set-trap-table", P1275_INOUT(1, 0), tba);
diff -Nru a/arch/sparc64/solaris/entry64.S b/arch/sparc64/solaris/entry64.S
--- a/arch/sparc64/solaris/entry64.S Tue Mar 4 19:30:03 2003
+++ b/arch/sparc64/solaris/entry64.S Tue Mar 4 19:30:03 2003
@@ -16,6 +16,7 @@
#include
#include
#include
+#include
#include "conv.h"
diff -Nru a/arch/um/kernel/irq.c b/arch/um/kernel/irq.c
--- a/arch/um/kernel/irq.c Tue Mar 4 19:30:12 2003
+++ b/arch/um/kernel/irq.c Tue Mar 4 19:30:12 2003
@@ -45,7 +45,7 @@
{
/*
* 'what should we do if we get a hw irq event on an illegal vector'.
- * each architecture has to answer this themselves, it doesnt deserve
+ * each architecture has to answer this themselves, it doesn't deserve
* a generic callback i think.
*/
#if CONFIG_X86
diff -Nru a/arch/v850/kernel/irq.c b/arch/v850/kernel/irq.c
--- a/arch/v850/kernel/irq.c Tue Mar 4 19:30:12 2003
+++ b/arch/v850/kernel/irq.c Tue Mar 4 19:30:12 2003
@@ -48,7 +48,7 @@
{
/*
* 'what should we do if we get a hw irq event on an illegal vector'.
- * each architecture has to answer this themselves, it doesnt deserve
+ * each architecture has to answer this themselves, it doesn't deserve
* a generic callback i think.
*/
printk("received IRQ %d with unknown interrupt type\n", irq);
diff -Nru a/arch/v850/kernel/ma.c b/arch/v850/kernel/ma.c
--- a/arch/v850/kernel/ma.c Tue Mar 4 19:30:04 2003
+++ b/arch/v850/kernel/ma.c Tue Mar 4 19:30:04 2003
@@ -61,7 +61,7 @@
specific chips may have more). */
if (chan < 2) {
unsigned bits = 0x3 << (chan * 3);
- /* Specify that the relevent pins on the chip should do
+ /* Specify that the relevant pins on the chip should do
serial I/O, not direct I/O. */
MA_PORT4_PMC |= bits;
/* Specify that we're using the UART, not the CSI device. */
diff -Nru a/arch/v850/kernel/rte_cb_multi.c b/arch/v850/kernel/rte_cb_multi.c
--- a/arch/v850/kernel/rte_cb_multi.c Tue Mar 4 19:30:05 2003
+++ b/arch/v850/kernel/rte_cb_multi.c Tue Mar 4 19:30:05 2003
@@ -67,7 +67,7 @@
if ((word & 0xFC0) == 0x780) {
/* A `jr' insn, fix up its offset (and yes, the
- wierd half-word swapping is intentional). */
+ weird half-word swapping is intentional). */
unsigned short hi = word & 0xFFFF;
unsigned short lo = word >> 16;
unsigned long udisp22
diff -Nru a/arch/v850/kernel/rte_ma1_cb.c b/arch/v850/kernel/rte_ma1_cb.c
--- a/arch/v850/kernel/rte_ma1_cb.c Tue Mar 4 19:30:12 2003
+++ b/arch/v850/kernel/rte_ma1_cb.c Tue Mar 4 19:30:12 2003
@@ -93,7 +93,7 @@
/* Turn on the timer. */
NB85E_TIMER_C_TMCC0 (tc) |= NB85E_TIMER_C_TMCC0_CAE;
- /* Make sure the relevent port0/port1 pins are assigned
+ /* Make sure the relevant port0/port1 pins are assigned
interrupt duty. We used INTP001-INTP011 (don't screw with
INTP000 because the monitor uses it). */
MA_PORT0_PMC |= 0x4; /* P02 (INTP001) in IRQ mode. */
diff -Nru a/arch/v850/kernel/time.c b/arch/v850/kernel/time.c
--- a/arch/v850/kernel/time.c Tue Mar 4 19:30:11 2003
+++ b/arch/v850/kernel/time.c Tue Mar 4 19:30:11 2003
@@ -25,7 +25,7 @@
#include "mach.h"
-u64 jiffies_64;
+u64 jiffies_64 = INITIAL_JIFFIES;
#define TICK_SIZE (tick_nsec / 1000)
diff -Nru a/arch/x86_64/ia32/ia32_ioctl.c b/arch/x86_64/ia32/ia32_ioctl.c
--- a/arch/x86_64/ia32/ia32_ioctl.c Tue Mar 4 19:30:09 2003
+++ b/arch/x86_64/ia32/ia32_ioctl.c Tue Mar 4 19:30:09 2003
@@ -3196,7 +3196,7 @@
*
* But how to keep track of these kernel buffers? We'd need to either
* keep track of them in some table _or_ know about usbdevicefs internals
- * (ie. the exact layout of it's file private, which is actually defined
+ * (ie. the exact layout of its file private, which is actually defined
* in linux/usbdevice_fs.h, the layout of the async queues are private to
* devio.c)
*
diff -Nru a/arch/x86_64/kernel/apic.c b/arch/x86_64/kernel/apic.c
--- a/arch/x86_64/kernel/apic.c Tue Mar 4 19:30:04 2003
+++ b/arch/x86_64/kernel/apic.c Tue Mar 4 19:30:04 2003
@@ -292,7 +292,7 @@
__error_in_apic_c();
/*
- * Double-check wether this APIC is really registered.
+ * Double-check whether this APIC is really registered.
* This is meaningless in clustered apic mode, so we skip it.
*/
if (!clustered_apic_mode &&
@@ -948,7 +948,7 @@
/*
* Local APIC timer interrupt. This is the most natural way for doing
* local interrupts, but local timer interrupts can be emulated by
- * broadcast interrupts too. [in case the hw doesnt support APIC timers]
+ * broadcast interrupts too. [in case the hw doesn't support APIC timers]
*
* [ if a single-CPU system runs an SMP kernel then we call the local
* interrupt as well. Thus we cannot inline the local irq ... ]
diff -Nru a/arch/x86_64/kernel/bluesmoke.c b/arch/x86_64/kernel/bluesmoke.c
--- a/arch/x86_64/kernel/bluesmoke.c Tue Mar 4 19:30:13 2003
+++ b/arch/x86_64/kernel/bluesmoke.c Tue Mar 4 19:30:13 2003
@@ -111,16 +111,12 @@
{
u32 low, high;
int i;
- unsigned int *cpu = info;
- BUG_ON (*cpu != smp_processor_id());
-
- preempt_disable();
 	for (i=0; i<banks; i++) {
 	pc->size = mincount;
wmb();
if (reload) {
- load_LDT(pc);
#ifdef CONFIG_SMP
preempt_disable();
+ load_LDT(pc);
 		if (current->mm->cpu_vm_mask != (1<<cpu))
return 0;
diff -Nru a/arch/x86_64/kernel/smp.c b/arch/x86_64/kernel/smp.c
--- a/arch/x86_64/kernel/smp.c Tue Mar 4 19:30:05 2003
+++ b/arch/x86_64/kernel/smp.c Tue Mar 4 19:30:05 2003
@@ -328,7 +328,7 @@
preempt_enable();
}
-static inline void do_flush_tlb_all_local(void)
+static void do_flush_tlb_all(void* info)
{
unsigned long cpu = smp_processor_id();
@@ -337,18 +337,9 @@
leave_mm(cpu);
}
-static void flush_tlb_all_ipi(void* info)
-{
- do_flush_tlb_all_local();
-}
-
void flush_tlb_all(void)
{
- preempt_disable();
- smp_call_function (flush_tlb_all_ipi,0,1,1);
-
- do_flush_tlb_all_local();
- preempt_enable();
+ on_each_cpu(do_flush_tlb_all, 0, 1, 1);
}
void smp_kdb_stop(void)
diff -Nru a/arch/x86_64/kernel/smpboot.c b/arch/x86_64/kernel/smpboot.c
--- a/arch/x86_64/kernel/smpboot.c Tue Mar 4 19:30:14 2003
+++ b/arch/x86_64/kernel/smpboot.c Tue Mar 4 19:30:14 2003
@@ -104,7 +104,7 @@
/*
* TSC synchronization.
*
- * We first check wether all CPUs have their TSC's synchronized,
+ * We first check whether all CPUs have their TSC's synchronized,
* then we print a warning if not, and always resync.
*/
@@ -774,7 +774,7 @@
}
/*
- * If we couldnt find an SMP configuration at boot time,
+ * If we couldn't find an SMP configuration at boot time,
* get out of here now!
*/
if (!smp_found_config) {
diff -Nru a/arch/x86_64/kernel/time.c b/arch/x86_64/kernel/time.c
--- a/arch/x86_64/kernel/time.c Tue Mar 4 19:30:11 2003
+++ b/arch/x86_64/kernel/time.c Tue Mar 4 19:30:11 2003
@@ -30,7 +30,7 @@
#include
#endif
-u64 jiffies_64;
+u64 jiffies_64 = INITIAL_JIFFIES;
extern int using_apic_timer;
diff -Nru a/arch/x86_64/mm/ioremap.c b/arch/x86_64/mm/ioremap.c
--- a/arch/x86_64/mm/ioremap.c Tue Mar 4 19:30:13 2003
+++ b/arch/x86_64/mm/ioremap.c Tue Mar 4 19:30:13 2003
@@ -205,6 +205,7 @@
iounmap(p);
p = NULL;
}
+ global_flush_tlb();
}
return p;
@@ -226,6 +227,7 @@
change_page_attr(virt_to_page(__va(p->phys_addr)),
p->size >> PAGE_SHIFT,
PAGE_KERNEL);
+ global_flush_tlb();
}
kfree(p);
}
diff -Nru a/arch/x86_64/mm/pageattr.c b/arch/x86_64/mm/pageattr.c
--- a/arch/x86_64/mm/pageattr.c Tue Mar 4 19:30:08 2003
+++ b/arch/x86_64/mm/pageattr.c Tue Mar 4 19:30:08 2003
@@ -123,12 +123,7 @@
static inline void flush_map(unsigned long address)
{
- preempt_disable();
-#ifdef CONFIG_SMP
- smp_call_function(flush_kernel_map, (void *)address, 1, 1);
-#endif
- flush_kernel_map((void *)address);
- preempt_enable();
+ on_each_cpu(flush_kernel_map, (void *)address, 1, 1);
}
struct deferred_page {
diff -Nru a/drivers/acorn/block/fd1772.c b/drivers/acorn/block/fd1772.c
--- a/drivers/acorn/block/fd1772.c Tue Mar 4 19:30:05 2003
+++ b/drivers/acorn/block/fd1772.c Tue Mar 4 19:30:05 2003
@@ -1081,7 +1081,7 @@
MotorOn = 1;
START_TIMEOUT();
/* we must wait for the IRQ here, because the ST-DMA is
- * released immediatly afterwards and the interrupt may be
+ * released immediately afterwards and the interrupt may be
* delivered to the wrong driver.
*/
}
diff -Nru a/drivers/acorn/block/mfmhd.c b/drivers/acorn/block/mfmhd.c
--- a/drivers/acorn/block/mfmhd.c Tue Mar 4 19:30:05 2003
+++ b/drivers/acorn/block/mfmhd.c Tue Mar 4 19:30:05 2003
@@ -406,7 +406,7 @@
outw(command, MFM_COMMAND);
status = inw(MFM_STATUS);
- DBG("issue_command: status immediatly after command issue: %02X:\n ", status >> 8);
+ DBG("issue_command: status immediately after command issue: %02X:\n ", status >> 8);
}
static void wait_for_completion(void)
@@ -451,7 +451,7 @@
return;
};
- /* OK so what ever happend its not an error, now I reckon we are left between
+ /* OK so whatever happened it's not an error, now I reckon we are left between
a choice of command end or some data which is ready to be collected */
/* I think we have to transfer data while the interrupt line is on and its
not any other type of interrupt */
diff -Nru a/drivers/acorn/net/ether3.c b/drivers/acorn/net/ether3.c
--- a/drivers/acorn/net/ether3.c Tue Mar 4 19:30:14 2003
+++ b/drivers/acorn/net/ether3.c Tue Mar 4 19:30:14 2003
@@ -101,7 +101,7 @@
/*
* ether3 read/write. Slow things down a bit...
- * The SEEQ8005 doesn't like us writing to it's registers
+ * The SEEQ8005 doesn't like us writing to its registers
* too quickly.
*/
static inline void ether3_outb(int v, const int r)
@@ -304,7 +304,7 @@
/*
* There is a problem with the NQ8005 in that it occasionally loses the
* last two bytes. To get round this problem, we receive the CRC as
- * well. That way, if we do loose the last two, then it doesn't matter.
+ * well. That way, if we do lose the last two, then it doesn't matter.
*/
ether3_outw(priv->regs.config1 | CFG1_TRANSEND, REG_CONFIG1);
ether3_outw((TX_END>>8) - 1, REG_BUFWIN);
diff -Nru a/drivers/acpi/Kconfig b/drivers/acpi/Kconfig
--- a/drivers/acpi/Kconfig Tue Mar 4 19:30:14 2003
+++ b/drivers/acpi/Kconfig Tue Mar 4 19:30:14 2003
@@ -6,6 +6,7 @@
config ACPI
bool "ACPI Support" if X86
+ depends on !X86_VISWS
default y if IA64 && (!IA64_HP_SIM || IA64_SGI_SN)
---help---
Advanced Configuration and Power Interface (ACPI) support for
diff -Nru a/drivers/acpi/acpi_ksyms.c b/drivers/acpi/acpi_ksyms.c
--- a/drivers/acpi/acpi_ksyms.c Tue Mar 4 19:30:06 2003
+++ b/drivers/acpi/acpi_ksyms.c Tue Mar 4 19:30:06 2003
@@ -76,6 +76,7 @@
EXPORT_SYMBOL(acpi_release_global_lock);
EXPORT_SYMBOL(acpi_get_current_resources);
EXPORT_SYMBOL(acpi_get_possible_resources);
+EXPORT_SYMBOL(acpi_walk_resources);
EXPORT_SYMBOL(acpi_set_current_resources);
EXPORT_SYMBOL(acpi_enable_event);
EXPORT_SYMBOL(acpi_disable_event);
@@ -86,6 +87,7 @@
EXPORT_SYMBOL(acpi_get_register);
EXPORT_SYMBOL(acpi_set_register);
EXPORT_SYMBOL(acpi_enter_sleep_state);
+EXPORT_SYMBOL(acpi_enter_sleep_state_s4bios);
EXPORT_SYMBOL(acpi_get_system_info);
EXPORT_SYMBOL(acpi_get_devices);
diff -Nru a/drivers/acpi/dispatcher/dsobject.c b/drivers/acpi/dispatcher/dsobject.c
--- a/drivers/acpi/dispatcher/dsobject.c Tue Mar 4 19:30:04 2003
+++ b/drivers/acpi/dispatcher/dsobject.c Tue Mar 4 19:30:04 2003
@@ -396,7 +396,7 @@
return_ACPI_STATUS (status);
}
- /* Re-type the object according to it's argument */
+ /* Re-type the object according to its argument */
node->type = ACPI_GET_OBJECT_TYPE (obj_desc);
diff -Nru a/drivers/acpi/ec.c b/drivers/acpi/ec.c
--- a/drivers/acpi/ec.c Tue Mar 4 19:30:09 2003
+++ b/drivers/acpi/ec.c Tue Mar 4 19:30:09 2003
@@ -644,15 +644,46 @@
}
+static acpi_status
+acpi_ec_io_ports (
+ struct acpi_resource *resource,
+ void *context)
+{
+ struct acpi_ec *ec = (struct acpi_ec *) context;
+ struct acpi_generic_address *addr;
+
+ if (resource->id != ACPI_RSTYPE_IO) {
+ return AE_OK;
+ }
+
+ /*
+ * The first address region returned is the data port, and
+ * the second address region returned is the status/command
+ * port.
+ */
+ if (ec->data_addr.register_bit_width == 0) {
+ addr = &ec->data_addr;
+ } else if (ec->command_addr.register_bit_width == 0) {
+ addr = &ec->command_addr;
+ } else {
+ return AE_CTRL_TERMINATE;
+ }
+
+ addr->address_space_id = ACPI_ADR_SPACE_SYSTEM_IO;
+ addr->register_bit_width = 8;
+ addr->register_bit_offset = 0;
+ addr->address = resource->data.io.min_base_address;
+
+ return AE_OK;
+}
+
+
static int
acpi_ec_start (
struct acpi_device *device)
{
- int result = 0;
acpi_status status = AE_OK;
struct acpi_ec *ec = NULL;
- struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL};
- struct acpi_resource *resource = NULL;
ACPI_FUNCTION_TRACE("acpi_ec_start");
@@ -667,33 +698,13 @@
/*
* Get I/O port addresses. Convert to GAS format.
*/
- status = acpi_get_current_resources(ec->handle, &buffer);
- if (ACPI_FAILURE(status)) {
+ status = acpi_walk_resources(ec->handle, METHOD_NAME__CRS,
+ acpi_ec_io_ports, ec);
+ if (ACPI_FAILURE(status) || ec->command_addr.register_bit_width == 0) {
ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Error getting I/O port addresses"));
return_VALUE(-ENODEV);
}
- resource = (struct acpi_resource *) buffer.pointer;
- if (!resource || (resource->id != ACPI_RSTYPE_IO)) {
- ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Invalid or missing resource\n"));
- result = -ENODEV;
- goto end;
- }
- ec->data_addr.address_space_id = ACPI_ADR_SPACE_SYSTEM_IO;
- ec->data_addr.register_bit_width = 8;
- ec->data_addr.register_bit_offset = 0;
- ec->data_addr.address = resource->data.io.min_base_address;
-
- resource = ACPI_NEXT_RESOURCE(resource);
- if (!resource || (resource->id != ACPI_RSTYPE_IO)) {
- ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Invalid or missing resource\n"));
- result = -ENODEV;
- goto end;
- }
- ec->command_addr.address_space_id = ACPI_ADR_SPACE_SYSTEM_IO;
- ec->command_addr.register_bit_width = 8;
- ec->command_addr.register_bit_offset = 0;
- ec->command_addr.address = resource->data.io.min_base_address;
ec->status_addr = ec->command_addr;
ACPI_DEBUG_PRINT((ACPI_DB_INFO, "gpe=0x%02x, ports=0x%2x,0x%2x\n",
@@ -706,8 +717,7 @@
status = acpi_install_gpe_handler(ec->gpe_bit,
ACPI_EVENT_EDGE_TRIGGERED, &acpi_ec_gpe_handler, ec);
if (ACPI_FAILURE(status)) {
- result = -ENODEV;
- goto end;
+ return_VALUE(-ENODEV);
}
status = acpi_install_address_space_handler (ec->handle,
@@ -715,13 +725,10 @@
&acpi_ec_space_setup, ec);
if (ACPI_FAILURE(status)) {
acpi_remove_gpe_handler(ec->gpe_bit, &acpi_ec_gpe_handler);
- result = -ENODEV;
- goto end;
+ return_VALUE(-ENODEV);
}
-end:
- acpi_os_free(buffer.pointer);
- return_VALUE(result);
+ return_VALUE(AE_OK);
}
diff -Nru a/drivers/acpi/events/Makefile b/drivers/acpi/events/Makefile
--- a/drivers/acpi/events/Makefile Tue Mar 4 19:30:03 2003
+++ b/drivers/acpi/events/Makefile Tue Mar 4 19:30:03 2003
@@ -4,6 +4,6 @@
obj-y := evevent.o evregion.o evsci.o evxfevnt.o \
evmisc.o evrgnini.o evxface.o evxfregn.o \
- evgpe.o
+ evgpe.o evgpeblk.o
EXTRA_CFLAGS += $(ACPI_CFLAGS)
diff -Nru a/drivers/acpi/events/evevent.c b/drivers/acpi/events/evevent.c
--- a/drivers/acpi/events/evevent.c Tue Mar 4 19:30:13 2003
+++ b/drivers/acpi/events/evevent.c Tue Mar 4 19:30:13 2003
@@ -79,7 +79,7 @@
/*
* Initialize the Fixed and General Purpose acpi_events prior. This is
- * done prior to enabling SCIs to prevent interrupts from occuring
+ * done prior to enabling SCIs to prevent interrupts from occurring
* before handlers are installed.
*/
status = acpi_ev_fixed_event_initialize ();
@@ -110,7 +110,7 @@
*
* RETURN: Status
*
- * DESCRIPTION: Install handlers for the SCI, Global Lock, and GPEs.
+ * DESCRIPTION: Install interrupt handlers for the SCI and Global Lock
*
******************************************************************************/
@@ -130,16 +130,6 @@
if (ACPI_FAILURE (status)) {
ACPI_REPORT_ERROR ((
"Unable to install System Control Interrupt Handler, %s\n",
- acpi_format_exception (status)));
- return_ACPI_STATUS (status);
- }
-
- /* Install handlers for control method GPE handlers (_Lxx, _Exx) */
-
- status = acpi_ev_init_gpe_control_methods ();
- if (ACPI_FAILURE (status)) {
- ACPI_REPORT_ERROR ((
- "Unable to initialize GPE control methods, %s\n",
acpi_format_exception (status)));
return_ACPI_STATUS (status);
}
diff -Nru a/drivers/acpi/events/evgpe.c b/drivers/acpi/events/evgpe.c
--- a/drivers/acpi/events/evgpe.c Tue Mar 4 19:30:10 2003
+++ b/drivers/acpi/events/evgpe.c Tue Mar 4 19:30:10 2003
@@ -51,401 +51,54 @@
/*******************************************************************************
*
- * FUNCTION: acpi_ev_gpe_initialize
+ * FUNCTION: acpi_ev_get_gpe_event_info
*
- * PARAMETERS: None
- *
- * RETURN: Status
+ * PARAMETERS: gpe_number - Raw GPE number
*
- * DESCRIPTION: Initialize the GPE data structures
+ * RETURN: A GPE event_info struct, or NULL if not a valid GPE.
*
- ******************************************************************************/
-
-acpi_status
-acpi_ev_gpe_initialize (void)
-{
- acpi_native_uint i;
- acpi_native_uint j;
- u32 gpe_block;
- u32 gpe_register;
- u32 gpe_number_index;
- u32 gpe_number;
- struct acpi_gpe_register_info *gpe_register_info;
- acpi_status status;
-
-
- ACPI_FUNCTION_TRACE ("ev_gpe_initialize");
-
-
- /*
- * Initialize the GPE Block globals
- *
- * Why the GPE register block lengths are divided by 2: From the ACPI Spec,
- * section "General-Purpose Event Registers", we have:
- *
- * "Each register block contains two registers of equal length
- * GPEx_STS and GPEx_EN (where x is 0 or 1). The length of the
- * GPE0_STS and GPE0_EN registers is equal to half the GPE0_LEN
- * The length of the GPE1_STS and GPE1_EN registers is equal to
- * half the GPE1_LEN. If a generic register block is not supported
- * then its respective block pointer and block length values in the
- * FADT table contain zeros. The GPE0_LEN and GPE1_LEN do not need
- * to be the same size."
- */
- acpi_gbl_gpe_block_info[0].register_count = 0;
- acpi_gbl_gpe_block_info[1].register_count = 0;
-
- acpi_gbl_gpe_block_info[0].block_address = &acpi_gbl_FADT->xgpe0_blk;
- acpi_gbl_gpe_block_info[1].block_address = &acpi_gbl_FADT->xgpe1_blk;
-
- acpi_gbl_gpe_block_info[0].block_base_number = 0;
- acpi_gbl_gpe_block_info[1].block_base_number = acpi_gbl_FADT->gpe1_base;
-
-
- /*
- * Determine the maximum GPE number for this machine.
- *
- * Note: both GPE0 and GPE1 are optional, and either can exist without
- * the other.
- * If EITHER the register length OR the block address are zero, then that
- * particular block is not supported.
- */
- if (acpi_gbl_FADT->xgpe0_blk.register_bit_width && acpi_gbl_FADT->xgpe0_blk.address) {
- /* GPE block 0 exists (has both length and address > 0) */
-
- acpi_gbl_gpe_block_info[0].register_count = (u16) (acpi_gbl_FADT->xgpe0_blk.register_bit_width / (ACPI_GPE_REGISTER_WIDTH * 2));
- acpi_gbl_gpe_number_max = (acpi_gbl_gpe_block_info[0].register_count * ACPI_GPE_REGISTER_WIDTH) - 1;
- }
-
- if (acpi_gbl_FADT->xgpe1_blk.register_bit_width && acpi_gbl_FADT->xgpe1_blk.address) {
- /* GPE block 1 exists (has both length and address > 0) */
-
- acpi_gbl_gpe_block_info[1].register_count = (u16) (acpi_gbl_FADT->xgpe1_blk.register_bit_width / (ACPI_GPE_REGISTER_WIDTH * 2));
-
- /* Check for GPE0/GPE1 overlap (if both banks exist) */
-
- if ((acpi_gbl_gpe_block_info[0].register_count) &&
- (acpi_gbl_gpe_number_max >= acpi_gbl_FADT->gpe1_base)) {
- ACPI_REPORT_ERROR ((
- "GPE0 block (GPE 0 to %d) overlaps the GPE1 block (GPE %d to %d) - Ignoring GPE1\n",
- acpi_gbl_gpe_number_max, acpi_gbl_FADT->gpe1_base,
- acpi_gbl_FADT->gpe1_base + ((acpi_gbl_gpe_block_info[1].register_count * ACPI_GPE_REGISTER_WIDTH) - 1)));
-
- /* Ignore GPE1 block by setting the register count to zero */
-
- acpi_gbl_gpe_block_info[1].register_count = 0;
- }
- else {
- /*
- * GPE0 and GPE1 do not have to be contiguous in the GPE number space,
- * But, GPE0 always starts at zero.
- */
- acpi_gbl_gpe_number_max = acpi_gbl_FADT->gpe1_base +
- ((acpi_gbl_gpe_block_info[1].register_count * ACPI_GPE_REGISTER_WIDTH) - 1);
- }
- }
-
- /* Exit if there are no GPE registers */
-
- acpi_gbl_gpe_register_count = acpi_gbl_gpe_block_info[0].register_count +
- acpi_gbl_gpe_block_info[1].register_count;
- if (!acpi_gbl_gpe_register_count) {
- /* GPEs are not required by ACPI, this is OK */
-
- ACPI_REPORT_INFO (("There are no GPE blocks defined in the FADT\n"));
- return_ACPI_STATUS (AE_OK);
- }
-
- /* Check for Max GPE number out-of-range */
-
- if (acpi_gbl_gpe_number_max > ACPI_GPE_MAX) {
- ACPI_REPORT_ERROR (("Maximum GPE number from FADT is too large: 0x%X\n",
- acpi_gbl_gpe_number_max));
- return_ACPI_STATUS (AE_BAD_VALUE);
- }
-
- /* Allocate the GPE number-to-index translation table */
-
- acpi_gbl_gpe_number_to_index = ACPI_MEM_CALLOCATE (
- sizeof (struct acpi_gpe_index_info) *
- ((acpi_size) acpi_gbl_gpe_number_max + 1));
- if (!acpi_gbl_gpe_number_to_index) {
- ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
- "Could not allocate the gpe_number_to_index table\n"));
- return_ACPI_STATUS (AE_NO_MEMORY);
- }
-
- /* Set the Gpe index table to GPE_INVALID */
-
- ACPI_MEMSET (acpi_gbl_gpe_number_to_index, (int) ACPI_GPE_INVALID,
- sizeof (struct acpi_gpe_index_info) * ((acpi_size) acpi_gbl_gpe_number_max + 1));
-
- /* Allocate the GPE register information block */
-
- acpi_gbl_gpe_register_info = ACPI_MEM_CALLOCATE (
- (acpi_size) acpi_gbl_gpe_register_count *
- sizeof (struct acpi_gpe_register_info));
- if (!acpi_gbl_gpe_register_info) {
- ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
- "Could not allocate the gpe_register_info table\n"));
- goto error_exit1;
- }
-
- /*
- * Allocate the GPE dispatch handler block. There are eight distinct GPEs
- * per register. Initialization to zeros is sufficient.
- */
- acpi_gbl_gpe_number_info = ACPI_MEM_CALLOCATE (
- ((acpi_size) acpi_gbl_gpe_register_count * ACPI_GPE_REGISTER_WIDTH) *
- sizeof (struct acpi_gpe_number_info));
- if (!acpi_gbl_gpe_number_info) {
- ACPI_DEBUG_PRINT ((ACPI_DB_ERROR, "Could not allocate the gpe_number_info table\n"));
- goto error_exit2;
- }
-
- /*
- * Initialize the GPE information and validation tables. A goal of these
- * tables is to hide the fact that there are two separate GPE register sets
- * in a given gpe hardware block, the status registers occupy the first half,
- * and the enable registers occupy the second half. Another goal is to hide
- * the fact that there may be multiple GPE hardware blocks.
- */
- gpe_register = 0;
- gpe_number_index = 0;
-
- for (gpe_block = 0; gpe_block < ACPI_MAX_GPE_BLOCKS; gpe_block++) {
- for (i = 0; i < acpi_gbl_gpe_block_info[gpe_block].register_count; i++) {
- gpe_register_info = &acpi_gbl_gpe_register_info[gpe_register];
-
- /* Init the Register info for this entire GPE register (8 GPEs) */
-
- gpe_register_info->base_gpe_number = (u8) (acpi_gbl_gpe_block_info[gpe_block].block_base_number
- + (i * ACPI_GPE_REGISTER_WIDTH));
-
- ACPI_STORE_ADDRESS (gpe_register_info->status_address.address,
- (acpi_gbl_gpe_block_info[gpe_block].block_address->address
- + i));
-
- ACPI_STORE_ADDRESS (gpe_register_info->enable_address.address,
- (acpi_gbl_gpe_block_info[gpe_block].block_address->address
- + i
- + acpi_gbl_gpe_block_info[gpe_block].register_count));
-
- gpe_register_info->status_address.address_space_id = acpi_gbl_gpe_block_info[gpe_block].block_address->address_space_id;
- gpe_register_info->enable_address.address_space_id = acpi_gbl_gpe_block_info[gpe_block].block_address->address_space_id;
- gpe_register_info->status_address.register_bit_width = ACPI_GPE_REGISTER_WIDTH;
- gpe_register_info->enable_address.register_bit_width = ACPI_GPE_REGISTER_WIDTH;
- gpe_register_info->status_address.register_bit_offset = ACPI_GPE_REGISTER_WIDTH;
- gpe_register_info->enable_address.register_bit_offset = ACPI_GPE_REGISTER_WIDTH;
-
- /* Init the Index mapping info for each GPE number within this register */
-
- for (j = 0; j < ACPI_GPE_REGISTER_WIDTH; j++) {
- gpe_number = gpe_register_info->base_gpe_number + (u32) j;
- acpi_gbl_gpe_number_to_index[gpe_number].number_index = (u8) gpe_number_index;
-
- acpi_gbl_gpe_number_info[gpe_number_index].bit_mask = acpi_gbl_decode_to8bit[j];
- gpe_number_index++;
- }
-
- /*
- * Clear the status/enable registers. Note that status registers
- * are cleared by writing a '1', while enable registers are cleared
- * by writing a '0'.
- */
- status = acpi_hw_low_level_write (ACPI_GPE_REGISTER_WIDTH, 0x00, &gpe_register_info->enable_address, 0);
- if (ACPI_FAILURE (status)) {
- return_ACPI_STATUS (status);
- }
-
- status = acpi_hw_low_level_write (ACPI_GPE_REGISTER_WIDTH, 0xFF, &gpe_register_info->status_address, 0);
- if (ACPI_FAILURE (status)) {
- return_ACPI_STATUS (status);
- }
-
- gpe_register++;
- }
-
- if (i) {
- /* Dump info about this valid GPE block */
-
- ACPI_DEBUG_PRINT ((ACPI_DB_INFO, "GPE Block%d: %X registers at %8.8X%8.8X\n",
- (s32) gpe_block, acpi_gbl_gpe_block_info[0].register_count,
- ACPI_HIDWORD (acpi_gbl_gpe_block_info[gpe_block].block_address->address),
- ACPI_LODWORD (acpi_gbl_gpe_block_info[gpe_block].block_address->address)));
-
- ACPI_DEBUG_PRINT ((ACPI_DB_INFO, "GPE Block%d defined as GPE%d to GPE%d\n",
- (s32) gpe_block,
- (u32) acpi_gbl_gpe_block_info[gpe_block].block_base_number,
- (u32) (acpi_gbl_gpe_block_info[gpe_block].block_base_number +
- ((acpi_gbl_gpe_block_info[gpe_block].register_count * ACPI_GPE_REGISTER_WIDTH) -1))));
- }
- }
-
- return_ACPI_STATUS (AE_OK);
-
-
- /* Error cleanup */
-
-error_exit2:
- ACPI_MEM_FREE (acpi_gbl_gpe_register_info);
-
-error_exit1:
- ACPI_MEM_FREE (acpi_gbl_gpe_number_to_index);
- return_ACPI_STATUS (AE_NO_MEMORY);
-}
-
-
-/*******************************************************************************
+ * DESCRIPTION: Returns the event_info struct
+ * associated with this GPE.
*
- * FUNCTION: acpi_ev_save_method_info
- *
- * PARAMETERS: None
- *
- * RETURN: None
- *
- * DESCRIPTION: Called from acpi_walk_namespace. Expects each object to be a
- * control method under the _GPE portion of the namespace.
- * Extract the name and GPE type from the object, saving this
- * information for quick lookup during GPE dispatch
- *
- * The name of each GPE control method is of the form:
- * "_Lnn" or "_Enn"
- * Where:
- * L - means that the GPE is level triggered
- * E - means that the GPE is edge triggered
- * nn - is the GPE number [in HEX]
+ * TBD: this function will go away when full support of GPE block devices
+ * is implemented!
*
******************************************************************************/
-static acpi_status
-acpi_ev_save_method_info (
- acpi_handle obj_handle,
- u32 level,
- void *obj_desc,
- void **return_value)
+struct acpi_gpe_event_info *
+acpi_ev_get_gpe_event_info (
+ u32 gpe_number)
{
- u32 gpe_number;
- struct acpi_gpe_number_info *gpe_number_info;
- char name[ACPI_NAME_SIZE + 1];
- u8 type;
- acpi_status status;
-
-
- ACPI_FUNCTION_NAME ("ev_save_method_info");
-
-
- /* Extract the name from the object and convert to a string */
+ struct acpi_gpe_block_info *gpe_block;
- ACPI_MOVE_UNALIGNED32_TO_32 (name,
- &((struct acpi_namespace_node *) obj_handle)->name.integer);
- name[ACPI_NAME_SIZE] = 0;
- /*
- * Edge/Level determination is based on the 2nd character of the method name
- */
- switch (name[1]) {
- case 'L':
- type = ACPI_EVENT_LEVEL_TRIGGERED;
- break;
-
- case 'E':
- type = ACPI_EVENT_EDGE_TRIGGERED;
- break;
+ /* Examine GPE Block 0 */
- default:
- /* Unknown method type, just ignore it! */
-
- ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
- "Unknown GPE method type: %s (name not of form _Lnn or _Enn)\n",
- name));
- return (AE_OK);
+ gpe_block = acpi_gbl_gpe_block_list_head;
+ if (!gpe_block) {
+ return (NULL);
}
- /* Convert the last two characters of the name to the GPE Number */
-
- gpe_number = ACPI_STRTOUL (&name[2], NULL, 16);
- if (gpe_number == ACPI_UINT32_MAX) {
- /* Conversion failed; invalid method, just ignore it */
-
- ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
- "Could not extract GPE number from name: %s (name not of form _Lnn or _Enn)\n",
- name));
- return (AE_OK);
+ if ((gpe_number >= gpe_block->block_base_number) &&
+ (gpe_number < gpe_block->block_base_number + (gpe_block->register_count * 8))) {
+ return (&gpe_block->event_info[gpe_number - gpe_block->block_base_number]);
}
- /* Get GPE index and ensure that we have a valid GPE number */
-
- gpe_number_info = acpi_ev_get_gpe_number_info (gpe_number);
- if (!gpe_number_info) {
- /* Not valid, all we can do here is ignore it */
+ /* Examine GPE Block 1 */
- ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
- "GPE number associated with method is not valid %s\n",
- name));
- return (AE_OK);
+ gpe_block = gpe_block->next;
+ if (!gpe_block) {
+ return (NULL);
}
- /*
- * Now we can add this information to the gpe_number_info block
- * for use during dispatch of this GPE.
- */
- gpe_number_info->type = type;
- gpe_number_info->method_node = (struct acpi_namespace_node *) obj_handle;
-
- /*
- * Enable the GPE (SCIs should be disabled at this point)
- */
- status = acpi_hw_enable_gpe (gpe_number);
- if (ACPI_FAILURE (status)) {
- return (status);
+ if ((gpe_number >= gpe_block->block_base_number) &&
+ (gpe_number < gpe_block->block_base_number + (gpe_block->register_count * 8))) {
+ return (&gpe_block->event_info[gpe_number - gpe_block->block_base_number]);
}
- ACPI_DEBUG_PRINT ((ACPI_DB_INFO, "Registered GPE method %s as GPE number %2.2X\n",
- name, gpe_number));
- return (AE_OK);
+ return (NULL);
}
-
-/*******************************************************************************
- *
- * FUNCTION: acpi_ev_init_gpe_control_methods
- *
- * PARAMETERS: None
- *
- * RETURN: Status
- *
- * DESCRIPTION: Obtain the control methods associated with the GPEs.
- * NOTE: Must be called AFTER namespace initialization!
- *
- ******************************************************************************/
-
-acpi_status
-acpi_ev_init_gpe_control_methods (void)
-{
- acpi_status status;
-
-
- ACPI_FUNCTION_TRACE ("ev_init_gpe_control_methods");
-
-
- /* Get a permanent handle to the _GPE object */
-
- status = acpi_get_handle (NULL, "\\_GPE", &acpi_gbl_gpe_obj_handle);
- if (ACPI_FAILURE (status)) {
- return_ACPI_STATUS (status);
- }
-
- /* Traverse the namespace under \_GPE to find all methods there */
-
- status = acpi_walk_namespace (ACPI_TYPE_METHOD, acpi_gbl_gpe_obj_handle,
- ACPI_UINT32_MAX, acpi_ev_save_method_info,
- NULL, NULL);
-
- return_ACPI_STATUS (status);
-}
-
-
/*******************************************************************************
*
* FUNCTION: acpi_ev_gpe_detect
@@ -470,62 +123,74 @@
struct acpi_gpe_register_info *gpe_register_info;
u32 in_value;
acpi_status status;
+ struct acpi_gpe_block_info *gpe_block;
ACPI_FUNCTION_NAME ("ev_gpe_detect");
- /*
- * Read all of the 8-bit GPE status and enable registers
- * in both of the register blocks, saving all of it.
- * Find all currently active GP events.
- */
- for (i = 0; i < acpi_gbl_gpe_register_count; i++) {
- gpe_register_info = &acpi_gbl_gpe_register_info[i];
+ /* Examine all GPE blocks attached to this interrupt level */
- status = acpi_hw_low_level_read (ACPI_GPE_REGISTER_WIDTH, &in_value, &gpe_register_info->status_address, 0);
- gpe_register_info->status = (u8) in_value;
- if (ACPI_FAILURE (status)) {
- return (ACPI_INTERRUPT_NOT_HANDLED);
- }
+ gpe_block = acpi_gbl_gpe_block_list_head;
+ while (gpe_block) {
+ /*
+ * Read all of the 8-bit GPE status and enable registers
+ * in this GPE block, saving all of them.
+ * Find all currently active GP events.
+ */
+ for (i = 0; i < gpe_block->register_count; i++) {
+ /* Get the next status/enable pair */
- status = acpi_hw_low_level_read (ACPI_GPE_REGISTER_WIDTH, &in_value, &gpe_register_info->enable_address, 0);
- gpe_register_info->enable = (u8) in_value;
- if (ACPI_FAILURE (status)) {
- return (ACPI_INTERRUPT_NOT_HANDLED);
- }
+ gpe_register_info = &gpe_block->register_info[i];
+
+ status = acpi_hw_low_level_read (ACPI_GPE_REGISTER_WIDTH, &in_value,
+ &gpe_register_info->status_address, 0);
+ gpe_register_info->status = (u8) in_value;
+ if (ACPI_FAILURE (status)) {
+ return (ACPI_INTERRUPT_NOT_HANDLED);
+ }
- ACPI_DEBUG_PRINT ((ACPI_DB_INTERRUPTS,
- "GPE block at %8.8X%8.8X - Values: Enable %02X Status %02X\n",
- ACPI_HIDWORD (gpe_register_info->enable_address.address),
- ACPI_LODWORD (gpe_register_info->enable_address.address),
- gpe_register_info->enable,
- gpe_register_info->status));
-
- /* First check if there is anything active at all in this register */
-
- enabled_status_byte = (u8) (gpe_register_info->status &
- gpe_register_info->enable);
- if (!enabled_status_byte) {
- /* No active GPEs in this register, move on */
+ status = acpi_hw_low_level_read (ACPI_GPE_REGISTER_WIDTH, &in_value,
+ &gpe_register_info->enable_address, 0);
+ gpe_register_info->enable = (u8) in_value;
+ if (ACPI_FAILURE (status)) {
+ return (ACPI_INTERRUPT_NOT_HANDLED);
+ }
- continue;
- }
+ ACPI_DEBUG_PRINT ((ACPI_DB_INTERRUPTS,
+ "GPE block at %8.8X%8.8X - Values: Enable %02X Status %02X\n",
+ ACPI_HIDWORD (gpe_register_info->enable_address.address),
+ ACPI_LODWORD (gpe_register_info->enable_address.address),
+ gpe_register_info->enable,
+ gpe_register_info->status));
+
+ /* First check if there is anything active at all in this register */
+
+ enabled_status_byte = (u8) (gpe_register_info->status &
+ gpe_register_info->enable);
+ if (!enabled_status_byte) {
+ /* No active GPEs in this register, move on */
+
+ continue;
+ }
- /* Now look at the individual GPEs in this byte register */
+ /* Now look at the individual GPEs in this byte register */
- for (j = 0, bit_mask = 1; j < ACPI_GPE_REGISTER_WIDTH; j++, bit_mask <<= 1) {
- /* Examine one GPE bit */
+ for (j = 0, bit_mask = 1; j < ACPI_GPE_REGISTER_WIDTH; j++, bit_mask <<= 1) {
+ /* Examine one GPE bit */
- if (enabled_status_byte & bit_mask) {
- /*
- * Found an active GPE. Dispatch the event to a handler
- * or method.
- */
- int_status |= acpi_ev_gpe_dispatch (
- gpe_register_info->base_gpe_number + j);
+ if (enabled_status_byte & bit_mask) {
+ /*
+ * Found an active GPE. Dispatch the event to a handler
+ * or method.
+ */
+ int_status |= acpi_ev_gpe_dispatch (
+ &gpe_block->event_info[(i * ACPI_GPE_REGISTER_WIDTH) +j]);
+ }
}
}
+
+ gpe_block = gpe_block->next;
}
return (int_status);
@@ -536,7 +201,7 @@
*
* FUNCTION: acpi_ev_asynch_execute_gpe_method
*
- * PARAMETERS: gpe_number - The 0-based GPE number
+ * PARAMETERS: gpe_event_info - Info for this GPE
*
* RETURN: None
*
@@ -552,20 +217,14 @@
acpi_ev_asynch_execute_gpe_method (
void *context)
{
- u32 gpe_number = (u32) ACPI_TO_INTEGER (context);
- u32 gpe_number_index;
- struct acpi_gpe_number_info gpe_number_info;
+ struct acpi_gpe_event_info *gpe_event_info = (void *) context;
+ u32 gpe_number = 0;
acpi_status status;
ACPI_FUNCTION_TRACE ("ev_asynch_execute_gpe_method");
- gpe_number_index = acpi_ev_get_gpe_number_index (gpe_number);
- if (gpe_number_index == ACPI_GPE_INVALID) {
- return_VOID;
- }
-
/*
* Take a snapshot of the GPE info for this level - we copy the
* info to prevent a race condition with remove_handler.
@@ -575,40 +234,38 @@
return_VOID;
}
- gpe_number_info = acpi_gbl_gpe_number_info [gpe_number_index];
status = acpi_ut_release_mutex (ACPI_MTX_EVENTS);
if (ACPI_FAILURE (status)) {
return_VOID;
}
- if (gpe_number_info.method_node) {
+ if (gpe_event_info->method_node) {
/*
* Invoke the GPE Method (_Lxx, _Exx):
* (Evaluate the _Lxx/_Exx control method that corresponds to this GPE.)
*/
- status = acpi_ns_evaluate_by_handle (gpe_number_info.method_node, NULL, NULL);
+ status = acpi_ns_evaluate_by_handle (gpe_event_info->method_node, NULL, NULL);
if (ACPI_FAILURE (status)) {
ACPI_REPORT_ERROR (("%s while evaluating method [%4.4s] for GPE[%2.2X]\n",
acpi_format_exception (status),
- gpe_number_info.method_node->name.ascii, gpe_number));
+ gpe_event_info->method_node->name.ascii, gpe_number));
}
}
- if (gpe_number_info.type & ACPI_EVENT_LEVEL_TRIGGERED) {
+ if (gpe_event_info->type & ACPI_EVENT_LEVEL_TRIGGERED) {
/*
* GPE is level-triggered, we clear the GPE status bit after handling
* the event.
*/
- status = acpi_hw_clear_gpe (gpe_number);
+ status = acpi_hw_clear_gpe (gpe_event_info);
if (ACPI_FAILURE (status)) {
return_VOID;
}
}
- /*
- * Enable the GPE.
- */
- (void) acpi_hw_enable_gpe (gpe_number);
+ /* Enable this GPE */
+
+ (void) acpi_hw_enable_gpe (gpe_event_info);
return_VOID;
}
@@ -617,7 +274,7 @@
*
* FUNCTION: acpi_ev_gpe_dispatch
*
- * PARAMETERS: gpe_number - The 0-based GPE number
+ * PARAMETERS: gpe_event_info - info for this GPE
*
* RETURN: INTERRUPT_HANDLED or INTERRUPT_NOT_HANDLED
*
@@ -629,9 +286,9 @@
u32
acpi_ev_gpe_dispatch (
- u32 gpe_number)
+ struct acpi_gpe_event_info *gpe_event_info)
{
- struct acpi_gpe_number_info *gpe_number_info;
+ u32 gpe_number = 0; /* TBD: remove */
acpi_status status;
@@ -639,23 +296,14 @@
/*
- * We don't have to worry about mutex on gpe_number_info because we are
- * executing at interrupt level.
- */
- gpe_number_info = acpi_ev_get_gpe_number_info (gpe_number);
- if (!gpe_number_info) {
- ACPI_DEBUG_PRINT ((ACPI_DB_ERROR, "GPE[%X] is not a valid event\n", gpe_number));
- return_VALUE (ACPI_INTERRUPT_NOT_HANDLED);
- }
-
- /*
* If edge-triggered, clear the GPE status bit now. Note that
* level-triggered events are cleared after the GPE is serviced.
*/
- if (gpe_number_info->type & ACPI_EVENT_EDGE_TRIGGERED) {
- status = acpi_hw_clear_gpe (gpe_number);
+ if (gpe_event_info->type & ACPI_EVENT_EDGE_TRIGGERED) {
+ status = acpi_hw_clear_gpe (gpe_event_info);
if (ACPI_FAILURE (status)) {
- ACPI_REPORT_ERROR (("acpi_ev_gpe_dispatch: Unable to clear GPE[%2.2X]\n", gpe_number));
+ ACPI_REPORT_ERROR (("acpi_ev_gpe_dispatch: Unable to clear GPE[%2.2X]\n",
+ gpe_number));
return_VALUE (ACPI_INTERRUPT_NOT_HANDLED);
}
}
@@ -667,19 +315,20 @@
* If there is neither a handler nor a method, we disable the level to
* prevent further events from coming in here.
*/
- if (gpe_number_info->handler) {
+ if (gpe_event_info->handler) {
/* Invoke the installed handler (at interrupt level) */
- gpe_number_info->handler (gpe_number_info->context);
+ gpe_event_info->handler (gpe_event_info->context);
}
- else if (gpe_number_info->method_node) {
+ else if (gpe_event_info->method_node) {
/*
* Disable GPE, so it doesn't keep firing before the method has a
* chance to run.
*/
- status = acpi_hw_disable_gpe (gpe_number);
+ status = acpi_hw_disable_gpe (gpe_event_info);
if (ACPI_FAILURE (status)) {
- ACPI_REPORT_ERROR (("acpi_ev_gpe_dispatch: Unable to disable GPE[%2.2X]\n", gpe_number));
+ ACPI_REPORT_ERROR (("acpi_ev_gpe_dispatch: Unable to disable GPE[%2.2X]\n",
+ gpe_number));
return_VALUE (ACPI_INTERRUPT_NOT_HANDLED);
}
@@ -688,22 +337,27 @@
*/
if (ACPI_FAILURE (acpi_os_queue_for_execution (OSD_PRIORITY_GPE,
acpi_ev_asynch_execute_gpe_method,
- ACPI_TO_POINTER (gpe_number)))) {
- ACPI_REPORT_ERROR (("acpi_ev_gpe_dispatch: Unable to queue handler for GPE[%2.2X], event is disabled\n", gpe_number));
+ gpe_event_info))) {
+ ACPI_REPORT_ERROR ((
+ "acpi_ev_gpe_dispatch: Unable to queue handler for GPE[%2.2X], event is disabled\n",
+ gpe_number));
}
}
else {
/* No handler or method to run! */
- ACPI_REPORT_ERROR (("acpi_ev_gpe_dispatch: No handler or method for GPE[%2.2X], disabling event\n", gpe_number));
+ ACPI_REPORT_ERROR ((
+ "acpi_ev_gpe_dispatch: No handler or method for GPE[%2.2X], disabling event\n",
+ gpe_number));
/*
* Disable the GPE. The GPE will remain disabled until the ACPI
* Core Subsystem is restarted, or the handler is reinstalled.
*/
- status = acpi_hw_disable_gpe (gpe_number);
+ status = acpi_hw_disable_gpe (gpe_event_info);
if (ACPI_FAILURE (status)) {
- ACPI_REPORT_ERROR (("acpi_ev_gpe_dispatch: Unable to disable GPE[%2.2X]\n", gpe_number));
+ ACPI_REPORT_ERROR (("acpi_ev_gpe_dispatch: Unable to disable GPE[%2.2X]\n",
+ gpe_number));
return_VALUE (ACPI_INTERRUPT_NOT_HANDLED);
}
}
@@ -711,10 +365,11 @@
/*
 * It is now safe to clear level-triggered events.
*/
- if (gpe_number_info->type & ACPI_EVENT_LEVEL_TRIGGERED) {
- status = acpi_hw_clear_gpe (gpe_number);
+ if (gpe_event_info->type & ACPI_EVENT_LEVEL_TRIGGERED) {
+ status = acpi_hw_clear_gpe (gpe_event_info);
if (ACPI_FAILURE (status)) {
- ACPI_REPORT_ERROR (("acpi_ev_gpe_dispatch: Unable to clear GPE[%2.2X]\n", gpe_number));
+ ACPI_REPORT_ERROR (("acpi_ev_gpe_dispatch: Unable to clear GPE[%2.2X]\n",
+ gpe_number));
return_VALUE (ACPI_INTERRUPT_NOT_HANDLED);
}
}
diff -Nru a/drivers/acpi/events/evgpeblk.c b/drivers/acpi/events/evgpeblk.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/drivers/acpi/events/evgpeblk.c Tue Mar 4 19:30:14 2003
@@ -0,0 +1,545 @@
+/******************************************************************************
+ *
+ * Module Name: evgpeblk - GPE block creation and initialization.
+ *
+ *****************************************************************************/
+
+/*
+ * Copyright (C) 2000 - 2003, R. Byron Moore
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions, and the following disclaimer,
+ * without modification.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ * substantially similar to the "NO WARRANTY" disclaimer below
+ * ("Disclaimer") and any redistribution must be conditioned upon
+ * including a substantially similar Disclaimer requirement for further
+ * binary redistribution.
+ * 3. Neither the names of the above-listed copyright holders nor the names
+ * of any contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * Alternatively, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") version 2 as published by the Free
+ * Software Foundation.
+ *
+ * NO WARRANTY
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
+ * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGES.
+ */
+
+#include <acpi/acpi.h>
+#include <acpi/acevents.h>
+#include <acpi/acnamesp.h>
+
+#define _COMPONENT ACPI_EVENTS
+	 ACPI_MODULE_NAME ("evgpeblk")
+
+
+/*******************************************************************************
+ *
+ * FUNCTION: acpi_ev_save_method_info
+ *
+ * PARAMETERS: Callback from walk_namespace
+ *
+ * RETURN: None
+ *
+ * DESCRIPTION: Called from acpi_walk_namespace. Expects each object to be a
+ * control method under the _GPE portion of the namespace.
+ * Extract the name and GPE type from the object, saving this
+ * information for quick lookup during GPE dispatch
+ *
+ * The name of each GPE control method is of the form:
+ * "_Lnn" or "_Enn"
+ * Where:
+ * L - means that the GPE is level triggered
+ * E - means that the GPE is edge triggered
+ * nn - is the GPE number [in HEX]
+ *
+ ******************************************************************************/
+
+static acpi_status
+acpi_ev_save_method_info (
+ acpi_handle obj_handle,
+ u32 level,
+ void *obj_desc,
+ void **return_value)
+{
+ struct acpi_gpe_block_info *gpe_block = (void *) obj_desc;
+ struct acpi_gpe_event_info *gpe_event_info;
+ u32 gpe_number;
+ char name[ACPI_NAME_SIZE + 1];
+ u8 type;
+ acpi_status status;
+
+
+ ACPI_FUNCTION_NAME ("ev_save_method_info");
+
+
+ /* Extract the name from the object and convert to a string */
+
+ ACPI_MOVE_UNALIGNED32_TO_32 (name,
+ &((struct acpi_namespace_node *) obj_handle)->name.integer);
+ name[ACPI_NAME_SIZE] = 0;
+
+ /*
+ * Edge/Level determination is based on the 2nd character of the method name
+ */
+ switch (name[1]) {
+ case 'L':
+ type = ACPI_EVENT_LEVEL_TRIGGERED;
+ break;
+
+ case 'E':
+ type = ACPI_EVENT_EDGE_TRIGGERED;
+ break;
+
+ default:
+ /* Unknown method type, just ignore it! */
+
+ ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
+ "Unknown GPE method type: %s (name not of form _Lnn or _Enn)\n",
+ name));
+ return (AE_OK);
+ }
+
+ /* Convert the last two characters of the name to the GPE Number */
+
+ gpe_number = ACPI_STRTOUL (&name[2], NULL, 16);
+ if (gpe_number == ACPI_UINT32_MAX) {
+ /* Conversion failed; invalid method, just ignore it */
+
+ ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
+ "Could not extract GPE number from name: %s (name is not of form _Lnn or _Enn)\n",
+ name));
+ return (AE_OK);
+ }
+
+ /* Ensure that we have a valid GPE number for this GPE block */
+
+ if ((gpe_number < gpe_block->block_base_number) ||
+	    (gpe_number >= (gpe_block->block_base_number + (gpe_block->register_count * 8)))) {
+ /* Not valid, all we can do here is ignore it */
+
+ ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
+ "GPE number associated with method %s is not valid\n", name));
+ return (AE_OK);
+ }
+
+ /*
+ * Now we can add this information to the gpe_event_info block
+ * for use during dispatch of this GPE.
+ */
+ gpe_event_info = &gpe_block->event_info[gpe_number - gpe_block->block_base_number];
+
+ gpe_event_info->type = type;
+ gpe_event_info->method_node = (struct acpi_namespace_node *) obj_handle;
+
+ /*
+ * Enable the GPE (SCIs should be disabled at this point)
+ */
+ status = acpi_hw_enable_gpe (gpe_event_info);
+ if (ACPI_FAILURE (status)) {
+ return (status);
+ }
+
+ ACPI_DEBUG_PRINT ((ACPI_DB_INFO, "Registered GPE method %s as GPE number %2.2X\n",
+ name, gpe_number));
+ return (AE_OK);
+}
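+The _Lnn/_Enn naming convention that acpi_ev_save_method_info decodes can be
+sketched in isolation. parse_gpe_method() below is a hypothetical helper, not
+an ACPICA function: it returns 1 for level-triggered, 0 for edge-triggered,
+-1 for names that are not GPE methods, and stores the hex GPE number.
+
+```c
+#include <assert.h>
+#include <stdlib.h>
+#include <string.h>
+
+static int parse_gpe_method(const char *name, unsigned long *gpe_number)
+{
+	int level;
+	char *end;
+
+	if (strlen(name) != 4 || name[0] != '_')
+		return -1;
+
+	switch (name[1]) {
+	case 'L': level = 1; break;	/* level triggered */
+	case 'E': level = 0; break;	/* edge triggered */
+	default:  return -1;		/* unknown method type, ignore */
+	}
+
+	*gpe_number = strtoul(&name[2], &end, 16);	/* "nn" is hex */
+	return (*end == '\0') ? level : -1;
+}
+
+int main(void)
+{
+	unsigned long n;
+
+	assert(parse_gpe_method("_L1A", &n) == 1 && n == 0x1A);
+	assert(parse_gpe_method("_E00", &n) == 0 && n == 0);
+	assert(parse_gpe_method("_Q12", &n) == -1);	/* not a GPE method */
+	return 0;
+}
+```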
+
+
+/*******************************************************************************
+ *
+ * FUNCTION: acpi_ev_install_gpe_block
+ *
+ * PARAMETERS: gpe_block - New GPE block
+ *
+ * RETURN: Status
+ *
+ * DESCRIPTION: Install new GPE block with mutex support
+ *
+ ******************************************************************************/
+
+acpi_status
+acpi_ev_install_gpe_block (
+ struct acpi_gpe_block_info *gpe_block)
+{
+ struct acpi_gpe_block_info *next_gpe_block;
+ acpi_status status;
+
+
+ status = acpi_ut_acquire_mutex (ACPI_MTX_EVENTS);
+ if (ACPI_FAILURE (status)) {
+ return (status);
+ }
+
+ /* Install the new block at the end of the global list */
+
+ if (acpi_gbl_gpe_block_list_head) {
+ next_gpe_block = acpi_gbl_gpe_block_list_head;
+ while (next_gpe_block->next) {
+ next_gpe_block = next_gpe_block->next;
+ }
+
+ next_gpe_block->next = gpe_block;
+ gpe_block->previous = next_gpe_block;
+ }
+ else {
+ acpi_gbl_gpe_block_list_head = gpe_block;
+ }
+
+ status = acpi_ut_release_mutex (ACPI_MTX_EVENTS);
+ return (status);
+}
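+The list manipulation in acpi_ev_install_gpe_block is a tail append on a
+doubly-linked list. A minimal model of just that linkage (struct gpe_block
+here is a stand-in, not the real struct acpi_gpe_block_info, and the mutex
+is omitted):
+
+```c
+#include <assert.h>
+#include <stddef.h>
+
+struct gpe_block {
+	struct gpe_block *previous;
+	struct gpe_block *next;
+};
+
+static struct gpe_block *list_head;
+
+/* Walk to the end of the list and link the new block, keeping the
+ * back-pointer ("previous") consistent with the forward chain. */
+static void install_block(struct gpe_block *blk)
+{
+	if (list_head) {
+		struct gpe_block *tail = list_head;
+
+		while (tail->next)
+			tail = tail->next;
+		tail->next = blk;
+		blk->previous = tail;
+	}
+	else {
+		list_head = blk;
+	}
+}
+
+int main(void)
+{
+	struct gpe_block a = { 0 }, b = { 0 };
+
+	install_block(&a);
+	install_block(&b);
+	assert(list_head == &a);
+	assert(a.next == &b && b.previous == &a);
+	return 0;
+}
+```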
+
+
+/*******************************************************************************
+ *
+ * FUNCTION: acpi_ev_create_gpe_info_blocks
+ *
+ * PARAMETERS: gpe_block - New GPE block
+ *
+ * RETURN: Status
+ *
+ * DESCRIPTION: Create the register_info and event_info blocks for this GPE block
+ *
+ ******************************************************************************/
+
+acpi_status
+acpi_ev_create_gpe_info_blocks (
+ struct acpi_gpe_block_info *gpe_block)
+{
+ struct acpi_gpe_register_info *gpe_register_info = NULL;
+ struct acpi_gpe_event_info *gpe_event_info = NULL;
+ struct acpi_gpe_event_info *this_event;
+ struct acpi_gpe_register_info *this_register;
+ acpi_native_uint i;
+ acpi_native_uint j;
+ acpi_status status;
+
+
+ ACPI_FUNCTION_TRACE ("ev_create_gpe_info_blocks");
+
+
+ /* Allocate the GPE register information block */
+
+ gpe_register_info = ACPI_MEM_CALLOCATE (
+ (acpi_size) gpe_block->register_count *
+ sizeof (struct acpi_gpe_register_info));
+ if (!gpe_register_info) {
+ ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
+ "Could not allocate the gpe_register_info table\n"));
+ return_ACPI_STATUS (AE_NO_MEMORY);
+ }
+
+ /*
+ * Allocate the GPE event_info block. There are eight distinct GPEs
+ * per register. Initialization to zeros is sufficient.
+ */
+ gpe_event_info = ACPI_MEM_CALLOCATE (
+ ((acpi_size) gpe_block->register_count * ACPI_GPE_REGISTER_WIDTH) *
+ sizeof (struct acpi_gpe_event_info));
+ if (!gpe_event_info) {
+ ACPI_DEBUG_PRINT ((ACPI_DB_ERROR, "Could not allocate the gpe_event_info table\n"));
+ status = AE_NO_MEMORY;
+ goto error_exit;
+ }
+
+ /*
+ * Initialize the GPE Register and Event structures. A goal of these
+ * tables is to hide the fact that there are two separate GPE register sets
+	 * in a given GPE hardware block: the status registers occupy the first half,
+ * and the enable registers occupy the second half. Another goal is to hide
+ * the fact that there may be multiple GPE hardware blocks.
+ */
+ this_register = gpe_register_info;
+ this_event = gpe_event_info;
+
+ for (i = 0; i < gpe_block->register_count; i++) {
+ /* Init the register_info for this GPE register (8 GPEs) */
+
+ this_register->base_gpe_number = (u8) (gpe_block->block_base_number +
+ (i * ACPI_GPE_REGISTER_WIDTH));
+
+ ACPI_STORE_ADDRESS (this_register->status_address.address,
+ (gpe_block->block_address.address
+ + i));
+
+ ACPI_STORE_ADDRESS (this_register->enable_address.address,
+ (gpe_block->block_address.address
+ + i
+ + gpe_block->register_count));
+
+ this_register->status_address.address_space_id = gpe_block->block_address.address_space_id;
+ this_register->enable_address.address_space_id = gpe_block->block_address.address_space_id;
+ this_register->status_address.register_bit_width = ACPI_GPE_REGISTER_WIDTH;
+ this_register->enable_address.register_bit_width = ACPI_GPE_REGISTER_WIDTH;
+ this_register->status_address.register_bit_offset = ACPI_GPE_REGISTER_WIDTH;
+ this_register->enable_address.register_bit_offset = ACPI_GPE_REGISTER_WIDTH;
+
+ /* Init the event_info for each GPE within this register */
+
+ for (j = 0; j < ACPI_GPE_REGISTER_WIDTH; j++) {
+ this_event->bit_mask = acpi_gbl_decode_to8bit[j];
+ this_event->register_info = this_register;
+ this_event++;
+ }
+
+ /*
+ * Clear the status/enable registers. Note that status registers
+ * are cleared by writing a '1', while enable registers are cleared
+ * by writing a '0'.
+ */
+ status = acpi_hw_low_level_write (ACPI_GPE_REGISTER_WIDTH, 0x00,
+ &this_register->enable_address, 0);
+ if (ACPI_FAILURE (status)) {
+ goto error_exit;
+ }
+
+ status = acpi_hw_low_level_write (ACPI_GPE_REGISTER_WIDTH, 0xFF,
+ &this_register->status_address, 0);
+ if (ACPI_FAILURE (status)) {
+ goto error_exit;
+ }
+
+ this_register++;
+ }
+
+ gpe_block->register_info = gpe_register_info;
+ gpe_block->event_info = gpe_event_info;
+
+ return_ACPI_STATUS (AE_OK);
+
+
+error_exit:
+
+ if (gpe_register_info) {
+ ACPI_MEM_FREE (gpe_register_info);
+ }
+ if (gpe_event_info) {
+ ACPI_MEM_FREE (gpe_event_info);
+ }
+
+	return_ACPI_STATUS (status);
+}
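+The address computation above encodes the hardware layout: within one GPE
+block the status bytes occupy the first half of the address range and the
+enable bytes the second half. A sketch of just that arithmetic, with plain
+uint64_t addresses standing in for struct acpi_generic_address:
+
+```c
+#include <assert.h>
+#include <stdint.h>
+
+/* Status register for pair i sits in the first half of the block */
+static uint64_t status_addr(uint64_t block_base, unsigned i)
+{
+	return block_base + i;
+}
+
+/* Enable register for pair i sits register_count bytes later,
+ * i.e. in the second half of the block */
+static uint64_t enable_addr(uint64_t block_base, unsigned i,
+	unsigned register_count)
+{
+	return block_base + i + register_count;
+}
+
+int main(void)
+{
+	/* A block of 2 registers at 0x8000: STS at 0x8000/0x8001,
+	 * EN at 0x8002/0x8003 */
+	assert(status_addr(0x8000, 0) == 0x8000);
+	assert(status_addr(0x8000, 1) == 0x8001);
+	assert(enable_addr(0x8000, 0, 2) == 0x8002);
+	assert(enable_addr(0x8000, 1, 2) == 0x8003);
+	return 0;
+}
+```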
+
+
+/*******************************************************************************
+ *
+ * FUNCTION: acpi_ev_create_gpe_block
+ *
+ * PARAMETERS: TBD
+ *
+ * RETURN: Status
+ *
+ * DESCRIPTION: Create and Install a block of GPE registers
+ *
+ ******************************************************************************/
+
+acpi_status
+acpi_ev_create_gpe_block (
+ char *pathname,
+ struct acpi_generic_address *gpe_block_address,
+ u32 register_count,
+ u8 gpe_block_base_number,
+ u32 interrupt_level)
+{
+ struct acpi_gpe_block_info *gpe_block;
+ acpi_status status;
+ acpi_handle obj_handle;
+
+
+ ACPI_FUNCTION_TRACE ("ev_create_gpe_block");
+
+
+ if (!register_count) {
+ return_ACPI_STATUS (AE_OK);
+ }
+
+ /* Get a handle to the parent object for this GPE block */
+
+ status = acpi_get_handle (NULL, pathname, &obj_handle);
+ if (ACPI_FAILURE (status)) {
+ return_ACPI_STATUS (status);
+ }
+
+ /* Allocate a new GPE block */
+
+ gpe_block = ACPI_MEM_CALLOCATE (sizeof (struct acpi_gpe_block_info));
+ if (!gpe_block) {
+ return_ACPI_STATUS (AE_NO_MEMORY);
+ }
+
+ /* Initialize the new GPE block */
+
+ gpe_block->register_count = register_count;
+ gpe_block->block_base_number = gpe_block_base_number;
+
+ ACPI_MEMCPY (&gpe_block->block_address, gpe_block_address, sizeof (struct acpi_generic_address));
+
+ /* Create the register_info and event_info sub-structures */
+
+ status = acpi_ev_create_gpe_info_blocks (gpe_block);
+ if (ACPI_FAILURE (status)) {
+ ACPI_MEM_FREE (gpe_block);
+ return_ACPI_STATUS (status);
+ }
+
+ /* Install the new block in the global list(s) */
+ /* TBD: Install block in the interrupt handler list */
+
+ status = acpi_ev_install_gpe_block (gpe_block);
+ if (ACPI_FAILURE (status)) {
+ ACPI_MEM_FREE (gpe_block);
+ return_ACPI_STATUS (status);
+ }
+
+ /* Dump info about this GPE block */
+
+ ACPI_DEBUG_PRINT ((ACPI_DB_INIT, "GPE Block: %X registers at %8.8X%8.8X\n",
+ gpe_block->register_count,
+ ACPI_HIDWORD (gpe_block->block_address.address),
+ ACPI_LODWORD (gpe_block->block_address.address)));
+
+ ACPI_DEBUG_PRINT ((ACPI_DB_INIT, "GPE Block defined as GPE%d to GPE%d\n",
+ gpe_block->block_base_number,
+ (u32) (gpe_block->block_base_number +
+ ((gpe_block->register_count * ACPI_GPE_REGISTER_WIDTH) -1))));
+
+ /* Find all GPE methods (_Lxx, _Exx) for this block */
+
+ status = acpi_walk_namespace (ACPI_TYPE_METHOD, obj_handle,
+ ACPI_UINT32_MAX, acpi_ev_save_method_info,
+ gpe_block, NULL);
+
+	return_ACPI_STATUS (status);
+}
+
+
+/*******************************************************************************
+ *
+ * FUNCTION: acpi_ev_gpe_initialize
+ *
+ * PARAMETERS: None
+ *
+ * RETURN: Status
+ *
+ * DESCRIPTION: Initialize the GPE data structures
+ *
+ ******************************************************************************/
+
+acpi_status
+acpi_ev_gpe_initialize (void)
+{
+ u32 register_count0 = 0;
+ u32 register_count1 = 0;
+ u32 gpe_number_max = 0;
+
+
+ ACPI_FUNCTION_TRACE ("ev_gpe_initialize");
+
+
+ /*
+ * Initialize the GPE Blocks defined in the FADT
+ *
+ * Why the GPE register block lengths are divided by 2: From the ACPI Spec,
+ * section "General-Purpose Event Registers", we have:
+ *
+ * "Each register block contains two registers of equal length
+ * GPEx_STS and GPEx_EN (where x is 0 or 1). The length of the
+ * GPE0_STS and GPE0_EN registers is equal to half the GPE0_LEN
+ * The length of the GPE1_STS and GPE1_EN registers is equal to
+ * half the GPE1_LEN. If a generic register block is not supported
+ * then its respective block pointer and block length values in the
+ * FADT table contain zeros. The GPE0_LEN and GPE1_LEN do not need
+ * to be the same size."
+ */
+
+ /*
+ * Determine the maximum GPE number for this machine.
+ *
+ * Note: both GPE0 and GPE1 are optional, and either can exist without
+ * the other.
+ * If EITHER the register length OR the block address are zero, then that
+ * particular block is not supported.
+ */
+ if (acpi_gbl_FADT->gpe0_blk_len &&
+ acpi_gbl_FADT->xgpe0_blk.address) {
+ /* GPE block 0 exists (has both length and address > 0) */
+
+ register_count0 = (u16) (acpi_gbl_FADT->gpe0_blk_len / 2);
+
+ gpe_number_max = (register_count0 * ACPI_GPE_REGISTER_WIDTH) - 1;
+
+ acpi_ev_create_gpe_block ("\\_GPE", &acpi_gbl_FADT->xgpe0_blk,
+ register_count0, 0, acpi_gbl_FADT->sci_int);
+ }
+
+ if (acpi_gbl_FADT->gpe1_blk_len &&
+ acpi_gbl_FADT->xgpe1_blk.address) {
+ /* GPE block 1 exists (has both length and address > 0) */
+
+ register_count1 = (u16) (acpi_gbl_FADT->gpe1_blk_len / 2);
+
+ /* Check for GPE0/GPE1 overlap (if both banks exist) */
+
+ if ((register_count0) &&
+ (gpe_number_max >= acpi_gbl_FADT->gpe1_base)) {
+ ACPI_REPORT_ERROR ((
+ "GPE0 block (GPE 0 to %d) overlaps the GPE1 block (GPE %d to %d) - Ignoring GPE1\n",
+ gpe_number_max, acpi_gbl_FADT->gpe1_base,
+ acpi_gbl_FADT->gpe1_base +
+ ((register_count1 * ACPI_GPE_REGISTER_WIDTH) - 1)));
+
+ /* Ignore GPE1 block by setting the register count to zero */
+
+ register_count1 = 0;
+ }
+ else {
+ acpi_ev_create_gpe_block ("\\_GPE", &acpi_gbl_FADT->xgpe1_blk,
+ register_count1, acpi_gbl_FADT->gpe1_base, acpi_gbl_FADT->sci_int);
+
+ /*
+			 * GPE0 and GPE1 do not have to be contiguous in the GPE number space,
+			 * but GPE0 always starts at zero.
+ */
+ gpe_number_max = acpi_gbl_FADT->gpe1_base +
+ ((register_count1 * ACPI_GPE_REGISTER_WIDTH) - 1);
+ }
+ }
+
+ /* Exit if there are no GPE registers */
+
+ if ((register_count0 + register_count1) == 0) {
+		/* GPEs are not required by ACPI; this is OK */
+
+ ACPI_REPORT_INFO (("There are no GPE blocks defined in the FADT\n"));
+ return_ACPI_STATUS (AE_OK);
+ }
+
+ /* Check for Max GPE number out-of-range */
+
+ if (gpe_number_max > ACPI_GPE_MAX) {
+ ACPI_REPORT_ERROR (("Maximum GPE number from FADT is too large: 0x%X\n",
+ gpe_number_max));
+ return_ACPI_STATUS (AE_BAD_VALUE);
+ }
+
+ return_ACPI_STATUS (AE_OK);
+}
+
+
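+The division-by-2 and the register/GPE-number arithmetic in
+acpi_ev_gpe_initialize can be checked in isolation: GPEx_BLK_LEN covers both
+the STS and EN register sets, so each set is half the length, and each
+byte-wide register pair covers 8 GPEs. A sketch under those assumptions:
+
+```c
+#include <assert.h>
+#include <stdint.h>
+
+#define GPE_REGISTER_WIDTH 8
+
+/* Half the FADT block length is status registers, half enable */
+static uint32_t gpe_register_count(uint32_t blk_len)
+{
+	return blk_len / 2;
+}
+
+/* Highest GPE number served by a block of register_count pairs */
+static uint32_t gpe_number_max(uint32_t base, uint32_t register_count)
+{
+	return base + (register_count * GPE_REGISTER_WIDTH) - 1;
+}
+
+int main(void)
+{
+	/* A typical GPE0_BLK_LEN of 8 gives 4 STS/EN pairs = GPE 0..31 */
+	assert(gpe_register_count(8) == 4);
+	assert(gpe_number_max(0, 4) == 31);
+
+	/* A GPE1 block based at 0x10 with 2 registers covers 0x10..0x1F,
+	 * so a GPE0 block whose max reaches 0x10 would overlap it */
+	assert(gpe_number_max(0x10, 2) == 0x1F);
+	return 0;
+}
+```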
diff -Nru a/drivers/acpi/events/evmisc.c b/drivers/acpi/events/evmisc.c
--- a/drivers/acpi/events/evmisc.c Tue Mar 4 19:30:10 2003
+++ b/drivers/acpi/events/evmisc.c Tue Mar 4 19:30:10 2003
@@ -86,84 +86,6 @@
/*******************************************************************************
*
- * FUNCTION: acpi_ev_get_gpe_register_info
- *
- * PARAMETERS: gpe_number - Raw GPE number
- *
- * RETURN: Pointer to the info struct for this GPE register.
- *
- * DESCRIPTION: Returns the register index (index into the GPE register info
- * table) associated with this GPE.
- *
- ******************************************************************************/
-
-struct acpi_gpe_register_info *
-acpi_ev_get_gpe_register_info (
- u32 gpe_number)
-{
-
- if (gpe_number > acpi_gbl_gpe_number_max) {
- return (NULL);
- }
-
- return (&acpi_gbl_gpe_register_info [ACPI_DIV_8 (acpi_gbl_gpe_number_to_index[gpe_number].number_index)]);
-}
-
-
-/*******************************************************************************
- *
- * FUNCTION: acpi_ev_get_gpe_number_info
- *
- * PARAMETERS: gpe_number - Raw GPE number
- *
- * RETURN: None.
- *
- * DESCRIPTION: Returns the number index (index into the GPE number info table)
- * associated with this GPE.
- *
- ******************************************************************************/
-
-struct acpi_gpe_number_info *
-acpi_ev_get_gpe_number_info (
- u32 gpe_number)
-{
-
- if (gpe_number > acpi_gbl_gpe_number_max) {
- return (NULL);
- }
-
- return (&acpi_gbl_gpe_number_info [acpi_gbl_gpe_number_to_index[gpe_number].number_index]);
-}
-
-
-/*******************************************************************************
- *
- * FUNCTION: acpi_ev_get_gpe_number_index
- *
- * PARAMETERS: gpe_number - Raw GPE number
- *
- * RETURN: None.
- *
- * DESCRIPTION: Returns the number index (index into the GPE number info table)
- * associated with this GPE.
- *
- ******************************************************************************/
-
-u32
-acpi_ev_get_gpe_number_index (
- u32 gpe_number)
-{
-
- if (gpe_number > acpi_gbl_gpe_number_max) {
- return (ACPI_GPE_INVALID);
- }
-
- return (acpi_gbl_gpe_number_to_index[gpe_number].number_index);
-}
-
-
-/*******************************************************************************
- *
* FUNCTION: acpi_ev_queue_notify_request
*
* PARAMETERS:
@@ -601,6 +523,9 @@
{
acpi_native_uint i;
acpi_status status;
+ struct acpi_gpe_block_info *gpe_block;
+ struct acpi_gpe_block_info *next_gpe_block;
+ struct acpi_gpe_event_info *gpe_event_info;
ACPI_FUNCTION_TRACE ("ev_terminate");
@@ -625,13 +550,19 @@
/*
* Disable all GPEs
*/
- for (i = 0; i < acpi_gbl_gpe_number_max; i++) {
- if (acpi_ev_get_gpe_number_index ((u32)i) != ACPI_GPE_INVALID) {
- status = acpi_hw_disable_gpe((u32) i);
+ gpe_block = acpi_gbl_gpe_block_list_head;
+ while (gpe_block) {
+ gpe_event_info = gpe_block->event_info;
+ for (i = 0; i < (gpe_block->register_count * 8); i++) {
+ status = acpi_hw_disable_gpe (gpe_event_info);
if (ACPI_FAILURE (status)) {
ACPI_DEBUG_PRINT ((ACPI_DB_ERROR, "Could not disable GPE %d\n", (u32) i));
}
+
+ gpe_event_info++;
}
+
+ gpe_block = gpe_block->next;
}
/*
@@ -654,21 +585,16 @@
}
/*
- * Free global tables, etc.
+ * Free global GPE blocks and related info structures
*/
- if (acpi_gbl_gpe_register_info) {
- ACPI_MEM_FREE (acpi_gbl_gpe_register_info);
- acpi_gbl_gpe_register_info = NULL;
- }
-
- if (acpi_gbl_gpe_number_info) {
- ACPI_MEM_FREE (acpi_gbl_gpe_number_info);
- acpi_gbl_gpe_number_info = NULL;
- }
+ gpe_block = acpi_gbl_gpe_block_list_head;
+ while (gpe_block) {
+ next_gpe_block = gpe_block->next;
+ ACPI_MEM_FREE (gpe_block->event_info);
+ ACPI_MEM_FREE (gpe_block->register_info);
+ ACPI_MEM_FREE (gpe_block);
- if (acpi_gbl_gpe_number_to_index) {
- ACPI_MEM_FREE (acpi_gbl_gpe_number_to_index);
- acpi_gbl_gpe_number_to_index = NULL;
+ gpe_block = next_gpe_block;
}
return_VOID;
diff -Nru a/drivers/acpi/events/evrgnini.c b/drivers/acpi/events/evrgnini.c
--- a/drivers/acpi/events/evrgnini.c Tue Mar 4 19:30:04 2003
+++ b/drivers/acpi/events/evrgnini.c Tue Mar 4 19:30:04 2003
@@ -410,7 +410,7 @@
* Get the appropriate address space handler for a newly
* created region.
*
- * This also performs address space specific intialization. For
+ * This also performs address space specific initialization. For
* example, PCI regions must have an _ADR object that contains
* a PCI address in the scope of the definition. This address is
* required to perform an access to PCI config space.
diff -Nru a/drivers/acpi/events/evsci.c b/drivers/acpi/events/evsci.c
--- a/drivers/acpi/events/evsci.c Tue Mar 4 19:30:11 2003
+++ b/drivers/acpi/events/evsci.c Tue Mar 4 19:30:11 2003
@@ -69,38 +69,24 @@
void *context)
{
u32 interrupt_handled = ACPI_INTERRUPT_NOT_HANDLED;
- u32 value;
- acpi_status status;
ACPI_FUNCTION_TRACE("ev_sci_handler");
/*
- * Make sure that ACPI is enabled by checking SCI_EN. Note that we are
- * required to treat the SCI interrupt as sharable, level, active low.
+ * We are guaranteed by the ACPI CA initialization/shutdown code that
+ * if this interrupt handler is installed, ACPI is enabled.
*/
- status = acpi_get_register (ACPI_BITREG_SCI_ENABLE, &value, ACPI_MTX_DO_NOT_LOCK);
- if (ACPI_FAILURE (status)) {
- return (ACPI_INTERRUPT_NOT_HANDLED);
- }
-
- if (!value) {
- /* ACPI is not enabled; this interrupt cannot be for us */
-
- return_VALUE (ACPI_INTERRUPT_NOT_HANDLED);
- }
/*
* Fixed acpi_events:
- * -------------
* Check for and dispatch any Fixed acpi_events that have occurred
*/
interrupt_handled |= acpi_ev_fixed_event_detect ();
/*
* GPEs:
- * -----
* Check for and dispatch any GPEs that have occurred
*/
interrupt_handled |= acpi_ev_gpe_detect ();
diff -Nru a/drivers/acpi/events/evxface.c b/drivers/acpi/events/evxface.c
--- a/drivers/acpi/events/evxface.c Tue Mar 4 19:30:10 2003
+++ b/drivers/acpi/events/evxface.c Tue Mar 4 19:30:10 2003
@@ -492,7 +492,7 @@
void *context)
{
acpi_status status;
- struct acpi_gpe_number_info *gpe_number_info;
+ struct acpi_gpe_event_info *gpe_event_info;
ACPI_FUNCTION_TRACE ("acpi_install_gpe_handler");
@@ -506,8 +506,8 @@
/* Ensure that we have a valid GPE number */
- gpe_number_info = acpi_ev_get_gpe_number_info (gpe_number);
- if (!gpe_number_info) {
+ gpe_event_info = acpi_ev_get_gpe_event_info (gpe_number);
+ if (!gpe_event_info) {
return_ACPI_STATUS (AE_BAD_PARAMETER);
}
@@ -518,25 +518,25 @@
/* Make sure that there isn't a handler there already */
- if (gpe_number_info->handler) {
+ if (gpe_event_info->handler) {
status = AE_ALREADY_EXISTS;
goto cleanup;
}
/* Install the handler */
- gpe_number_info->handler = handler;
- gpe_number_info->context = context;
- gpe_number_info->type = (u8) type;
+ gpe_event_info->handler = handler;
+ gpe_event_info->context = context;
+ gpe_event_info->type = (u8) type;
	/* Clear the GPE (of stale events), then enable it */
- status = acpi_hw_clear_gpe (gpe_number);
+ status = acpi_hw_clear_gpe (gpe_event_info);
if (ACPI_FAILURE (status)) {
goto cleanup;
}
- status = acpi_hw_enable_gpe (gpe_number);
+ status = acpi_hw_enable_gpe (gpe_event_info);
cleanup:
@@ -564,7 +564,7 @@
acpi_gpe_handler handler)
{
acpi_status status;
- struct acpi_gpe_number_info *gpe_number_info;
+ struct acpi_gpe_event_info *gpe_event_info;
ACPI_FUNCTION_TRACE ("acpi_remove_gpe_handler");
@@ -578,14 +578,14 @@
/* Ensure that we have a valid GPE number */
- gpe_number_info = acpi_ev_get_gpe_number_info (gpe_number);
- if (!gpe_number_info) {
+ gpe_event_info = acpi_ev_get_gpe_event_info (gpe_number);
+ if (!gpe_event_info) {
return_ACPI_STATUS (AE_BAD_PARAMETER);
}
/* Disable the GPE before removing the handler */
- status = acpi_hw_disable_gpe (gpe_number);
+ status = acpi_hw_disable_gpe (gpe_event_info);
if (ACPI_FAILURE (status)) {
return_ACPI_STATUS (status);
}
@@ -597,16 +597,16 @@
/* Make sure that the installed handler is the same */
- if (gpe_number_info->handler != handler) {
- (void) acpi_hw_enable_gpe (gpe_number);
+ if (gpe_event_info->handler != handler) {
+ (void) acpi_hw_enable_gpe (gpe_event_info);
status = AE_BAD_PARAMETER;
goto cleanup;
}
/* Remove the handler */
- gpe_number_info->handler = NULL;
- gpe_number_info->context = NULL;
+ gpe_event_info->handler = NULL;
+ gpe_event_info->context = NULL;
cleanup:
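The evxface.c changes above replace per-operation gpe_number translation with a single lookup of a `struct acpi_gpe_event_info`, which is then passed to the hardware routines. A simplified sketch of that pattern, using mock structures and hypothetical names (not the real ACPI CA types):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified mock of struct acpi_gpe_event_info: one lookup now yields the
 * handler slot, its context, and the hardware bit mask together, where the
 * old code translated gpe_number separately for every operation. */
struct gpe_event_info {
	void (*handler)(void *context);
	void *context;
	unsigned char bit_mask;
};

static struct gpe_event_info gpe_table[8];

static void dummy_handler(void *context) { (void) context; }

/* Hypothetical analogue of acpi_ev_get_gpe_event_info(): NULL for an
 * invalid GPE number, otherwise the per-event record. */
static struct gpe_event_info *get_gpe_event_info(unsigned int gpe_number)
{
	if (gpe_number >= 8)
		return NULL;
	return &gpe_table[gpe_number];
}

/* Mirrors the validation order in acpi_install_gpe_handler:
 * bad number -> -1 (AE_BAD_PARAMETER), occupied slot -> -2 (AE_ALREADY_EXISTS). */
static int install_gpe_handler(unsigned int gpe_number,
			       void (*handler)(void *), void *context)
{
	struct gpe_event_info *info = get_gpe_event_info(gpe_number);

	if (!info)
		return -1;
	if (info->handler)
		return -2;
	info->handler = handler;
	info->context = context;
	return 0;
}
```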
diff -Nru a/drivers/acpi/events/evxfevnt.c b/drivers/acpi/events/evxfevnt.c
--- a/drivers/acpi/events/evxfevnt.c Tue Mar 4 19:30:05 2003
+++ b/drivers/acpi/events/evxfevnt.c Tue Mar 4 19:30:05 2003
@@ -163,6 +163,7 @@
{
acpi_status status = AE_OK;
u32 value;
+ struct acpi_gpe_event_info *gpe_event_info;
ACPI_FUNCTION_TRACE ("acpi_enable_event");
@@ -209,19 +210,20 @@
/* Ensure that we have a valid GPE number */
- if (acpi_ev_get_gpe_number_index (event) == ACPI_GPE_INVALID) {
+ gpe_event_info = acpi_ev_get_gpe_event_info (event);
+ if (!gpe_event_info) {
return_ACPI_STATUS (AE_BAD_PARAMETER);
}
/* Enable the requested GPE number */
- status = acpi_hw_enable_gpe (event);
+ status = acpi_hw_enable_gpe (gpe_event_info);
if (ACPI_FAILURE (status)) {
return_ACPI_STATUS (status);
}
if (flags & ACPI_EVENT_WAKE_ENABLE) {
- acpi_hw_enable_gpe_for_wakeup (event);
+ acpi_hw_enable_gpe_for_wakeup (gpe_event_info);
}
break;
@@ -257,6 +259,7 @@
{
acpi_status status = AE_OK;
u32 value;
+ struct acpi_gpe_event_info *gpe_event_info;
ACPI_FUNCTION_TRACE ("acpi_disable_event");
@@ -301,7 +304,8 @@
/* Ensure that we have a valid GPE number */
- if (acpi_ev_get_gpe_number_index (event) == ACPI_GPE_INVALID) {
+ gpe_event_info = acpi_ev_get_gpe_event_info (event);
+ if (!gpe_event_info) {
return_ACPI_STATUS (AE_BAD_PARAMETER);
}
@@ -311,10 +315,10 @@
*/
if (flags & ACPI_EVENT_WAKE_DISABLE) {
- acpi_hw_disable_gpe_for_wakeup (event);
+ acpi_hw_disable_gpe_for_wakeup (gpe_event_info);
}
else {
- status = acpi_hw_disable_gpe (event);
+ status = acpi_hw_disable_gpe (gpe_event_info);
}
break;
@@ -346,6 +350,7 @@
u32 type)
{
acpi_status status = AE_OK;
+ struct acpi_gpe_event_info *gpe_event_info;
ACPI_FUNCTION_TRACE ("acpi_clear_event");
@@ -375,11 +380,12 @@
/* Ensure that we have a valid GPE number */
- if (acpi_ev_get_gpe_number_index (event) == ACPI_GPE_INVALID) {
+ gpe_event_info = acpi_ev_get_gpe_event_info (event);
+ if (!gpe_event_info) {
return_ACPI_STATUS (AE_BAD_PARAMETER);
}
- status = acpi_hw_clear_gpe (event);
+ status = acpi_hw_clear_gpe (gpe_event_info);
break;
@@ -415,6 +421,7 @@
acpi_event_status *event_status)
{
acpi_status status = AE_OK;
+ struct acpi_gpe_event_info *gpe_event_info;
ACPI_FUNCTION_TRACE ("acpi_get_event_status");
@@ -447,7 +454,8 @@
/* Ensure that we have a valid GPE number */
- if (acpi_ev_get_gpe_number_index (event) == ACPI_GPE_INVALID) {
+ gpe_event_info = acpi_ev_get_gpe_event_info (event);
+ if (!gpe_event_info) {
return_ACPI_STATUS (AE_BAD_PARAMETER);
}
@@ -463,4 +471,5 @@
return_ACPI_STATUS (status);
}
+
diff -Nru a/drivers/acpi/hardware/hwgpe.c b/drivers/acpi/hardware/hwgpe.c
--- a/drivers/acpi/hardware/hwgpe.c Tue Mar 4 19:30:04 2003
+++ b/drivers/acpi/hardware/hwgpe.c Tue Mar 4 19:30:04 2003
@@ -51,26 +51,6 @@
/******************************************************************************
*
- * FUNCTION: acpi_hw_get_gpe_bit_mask
- *
- * PARAMETERS: gpe_number - The GPE
- *
- * RETURN: Gpe register bitmask for this gpe level
- *
- * DESCRIPTION: Get the bitmask for this GPE
- *
- ******************************************************************************/
-
-u8
-acpi_hw_get_gpe_bit_mask (
- u32 gpe_number)
-{
- return (acpi_gbl_gpe_number_info [acpi_ev_get_gpe_number_index (gpe_number)].bit_mask);
-}
-
-
-/******************************************************************************
- *
* FUNCTION: acpi_hw_enable_gpe
*
* PARAMETERS: gpe_number - The GPE
@@ -83,37 +63,29 @@
acpi_status
acpi_hw_enable_gpe (
- u32 gpe_number)
+ struct acpi_gpe_event_info *gpe_event_info)
{
u32 in_byte;
acpi_status status;
- struct acpi_gpe_register_info *gpe_register_info;
ACPI_FUNCTION_ENTRY ();
- /* Get the info block for the entire GPE register */
-
- gpe_register_info = acpi_ev_get_gpe_register_info (gpe_number);
- if (!gpe_register_info) {
- return (AE_BAD_PARAMETER);
- }
-
/*
* Read the current value of the register, set the appropriate bit
* to enable the GPE, and write out the new register.
*/
status = acpi_hw_low_level_read (8, &in_byte,
- &gpe_register_info->enable_address, 0);
+ &gpe_event_info->register_info->enable_address, 0);
if (ACPI_FAILURE (status)) {
return (status);
}
/* Write with the new GPE bit enabled */
- status = acpi_hw_low_level_write (8, (in_byte | acpi_hw_get_gpe_bit_mask (gpe_number)),
- &gpe_register_info->enable_address, 0);
+ status = acpi_hw_low_level_write (8, (in_byte | gpe_event_info->bit_mask),
+ &gpe_event_info->register_info->enable_address, 0);
return (status);
}
@@ -134,7 +106,7 @@
void
acpi_hw_enable_gpe_for_wakeup (
- u32 gpe_number)
+ struct acpi_gpe_event_info *gpe_event_info)
{
struct acpi_gpe_register_info *gpe_register_info;
@@ -144,7 +116,7 @@
/* Get the info block for the entire GPE register */
- gpe_register_info = acpi_ev_get_gpe_register_info (gpe_number);
+ gpe_register_info = gpe_event_info->register_info;
if (!gpe_register_info) {
return;
}
@@ -152,7 +124,7 @@
/*
* Set the bit so we will not disable this when sleeping
*/
- gpe_register_info->wake_enable |= acpi_hw_get_gpe_bit_mask (gpe_number);
+ gpe_register_info->wake_enable |= gpe_event_info->bit_mask;
}
@@ -170,7 +142,7 @@
acpi_status
acpi_hw_disable_gpe (
- u32 gpe_number)
+ struct acpi_gpe_event_info *gpe_event_info)
{
u32 in_byte;
acpi_status status;
@@ -182,7 +154,7 @@
/* Get the info block for the entire GPE register */
- gpe_register_info = acpi_ev_get_gpe_register_info (gpe_number);
+ gpe_register_info = gpe_event_info->register_info;
if (!gpe_register_info) {
return (AE_BAD_PARAMETER);
}
@@ -199,13 +171,13 @@
/* Write the byte with this GPE bit cleared */
- status = acpi_hw_low_level_write (8, (in_byte & ~(acpi_hw_get_gpe_bit_mask (gpe_number))),
+ status = acpi_hw_low_level_write (8, (in_byte & ~(gpe_event_info->bit_mask)),
&gpe_register_info->enable_address, 0);
if (ACPI_FAILURE (status)) {
return (status);
}
- acpi_hw_disable_gpe_for_wakeup(gpe_number);
+ acpi_hw_disable_gpe_for_wakeup (gpe_event_info);
return (AE_OK);
}
@@ -225,7 +197,7 @@
void
acpi_hw_disable_gpe_for_wakeup (
- u32 gpe_number)
+ struct acpi_gpe_event_info *gpe_event_info)
{
struct acpi_gpe_register_info *gpe_register_info;
@@ -235,7 +207,7 @@
/* Get the info block for the entire GPE register */
- gpe_register_info = acpi_ev_get_gpe_register_info (gpe_number);
+ gpe_register_info = gpe_event_info->register_info;
if (!gpe_register_info) {
return;
}
@@ -243,7 +215,7 @@
/*
* Clear the bit so we will disable this when sleeping
*/
- gpe_register_info->wake_enable &= ~(acpi_hw_get_gpe_bit_mask (gpe_number));
+ gpe_register_info->wake_enable &= ~(gpe_event_info->bit_mask);
}
@@ -261,28 +233,20 @@
acpi_status
acpi_hw_clear_gpe (
- u32 gpe_number)
+ struct acpi_gpe_event_info *gpe_event_info)
{
acpi_status status;
- struct acpi_gpe_register_info *gpe_register_info;
ACPI_FUNCTION_ENTRY ();
- /* Get the info block for the entire GPE register */
-
- gpe_register_info = acpi_ev_get_gpe_register_info (gpe_number);
- if (!gpe_register_info) {
- return (AE_BAD_PARAMETER);
- }
-
/*
* Write a one to the appropriate bit in the status register to
* clear this GPE.
*/
- status = acpi_hw_low_level_write (8, acpi_hw_get_gpe_bit_mask (gpe_number),
- &gpe_register_info->status_address, 0);
+ status = acpi_hw_low_level_write (8, gpe_event_info->bit_mask,
+ &gpe_event_info->register_info->status_address, 0);
return (status);
}
@@ -308,6 +272,7 @@
u32 in_byte;
u8 bit_mask;
struct acpi_gpe_register_info *gpe_register_info;
+ struct acpi_gpe_event_info *gpe_event_info;
acpi_status status;
acpi_event_status local_event_status = 0;
@@ -319,16 +284,18 @@
return (AE_BAD_PARAMETER);
}
- /* Get the info block for the entire GPE register */
-
- gpe_register_info = acpi_ev_get_gpe_register_info (gpe_number);
- if (!gpe_register_info) {
+ gpe_event_info = acpi_ev_get_gpe_event_info (gpe_number);
+ if (!gpe_event_info) {
return (AE_BAD_PARAMETER);
}
+ /* Get the info block for the entire GPE register */
+
+ gpe_register_info = gpe_event_info->register_info;
+
/* Get the register bitmask for this GPE */
- bit_mask = acpi_hw_get_gpe_bit_mask (gpe_number);
+ bit_mask = gpe_event_info->bit_mask;
/* GPE Enabled? */
@@ -375,7 +342,7 @@
*
* DESCRIPTION: Disable all non-wakeup GPEs
* Call with interrupts disabled. The interrupt handler also
- * modifies acpi_gbl_gpe_register_info[i].Enable, so it should not be
+ * modifies gpe_register_info->Enable, so it should not be
* given the chance to run until after non-wake GPEs are
* re-enabled.
*
@@ -389,40 +356,49 @@
struct acpi_gpe_register_info *gpe_register_info;
u32 in_value;
acpi_status status;
+ struct acpi_gpe_block_info *gpe_block;
ACPI_FUNCTION_ENTRY ();
- for (i = 0; i < acpi_gbl_gpe_register_count; i++) {
- /* Get the info block for the entire GPE register */
+ gpe_block = acpi_gbl_gpe_block_list_head;
+ while (gpe_block) {
+ /* Get the register info for the entire GPE block */
- gpe_register_info = &acpi_gbl_gpe_register_info[i];
+ gpe_register_info = gpe_block->register_info;
if (!gpe_register_info) {
return (AE_BAD_PARAMETER);
}
- /*
- * Read the enabled status of all GPEs. We
- * will be using it to restore all the GPEs later.
- */
- status = acpi_hw_low_level_read (8, &in_value,
- &gpe_register_info->enable_address, 0);
- if (ACPI_FAILURE (status)) {
- return (status);
- }
-
- gpe_register_info->enable = (u8) in_value;
+ for (i = 0; i < gpe_block->register_count; i++) {
+ /*
+ * Read the enabled status of all GPEs. We
+ * will be using it to restore all the GPEs later.
+ */
+ status = acpi_hw_low_level_read (8, &in_value,
+ &gpe_register_info->enable_address, 0);
+ if (ACPI_FAILURE (status)) {
+ return (status);
+ }
+
+ gpe_register_info->enable = (u8) in_value;
+
+ /*
+ * Disable all GPEs except wakeup GPEs.
+ */
+ status = acpi_hw_low_level_write (8, gpe_register_info->wake_enable,
+ &gpe_register_info->enable_address, 0);
+ if (ACPI_FAILURE (status)) {
+ return (status);
+ }
- /*
- * Disable all GPEs except wakeup GPEs.
- */
- status = acpi_hw_low_level_write (8, gpe_register_info->wake_enable,
- &gpe_register_info->enable_address, 0);
- if (ACPI_FAILURE (status)) {
- return (status);
+ gpe_register_info++;
}
+
+ gpe_block = gpe_block->next;
}
+
return (AE_OK);
}
@@ -446,28 +422,37 @@
u32 i;
struct acpi_gpe_register_info *gpe_register_info;
acpi_status status;
+ struct acpi_gpe_block_info *gpe_block;
ACPI_FUNCTION_ENTRY ();
- for (i = 0; i < acpi_gbl_gpe_register_count; i++) {
- /* Get the info block for the entire GPE register */
+ gpe_block = acpi_gbl_gpe_block_list_head;
+ while (gpe_block) {
+ /* Get the register info for the entire GPE block */
- gpe_register_info = &acpi_gbl_gpe_register_info[i];
+ gpe_register_info = gpe_block->register_info;
if (!gpe_register_info) {
return (AE_BAD_PARAMETER);
}
- /*
- * We previously stored the enabled status of all GPEs.
- * Blast them back in.
- */
- status = acpi_hw_low_level_write (8, gpe_register_info->enable,
- &gpe_register_info->enable_address, 0);
- if (ACPI_FAILURE (status)) {
- return (status);
+ for (i = 0; i < gpe_block->register_count; i++) {
+ /*
+ * We previously stored the enabled status of all GPEs.
+ * Blast them back in.
+ */
+ status = acpi_hw_low_level_write (8, gpe_register_info->enable,
+ &gpe_register_info->enable_address, 0);
+ if (ACPI_FAILURE (status)) {
+ return (status);
+ }
+
+ gpe_register_info++;
}
+
+ gpe_block = gpe_block->next;
}
+
return (AE_OK);
}
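The hwgpe.c rework above moves from one global GPE register array to a linked list of GPE blocks, each holding its own register array. A self-contained sketch of the save/mask/restore walk, under simplified mock structures (field names loosely follow the patch, not the real ACPI types):

```c
#include <assert.h>
#include <stddef.h>

struct gpe_register {
	unsigned char enable;       /* live enable bits */
	unsigned char saved_enable; /* snapshot taken before sleeping */
	unsigned char wake_enable;  /* bits that must stay on while asleep */
};

struct gpe_block {
	struct gpe_register *register_info;
	unsigned int register_count;
	struct gpe_block *next;
};

/* Mirrors acpi_hw_disable_non_wakeup_gpes: for every register in every
 * block, save the enable byte, then leave only the wakeup bits enabled. */
static void disable_non_wakeup(struct gpe_block *head)
{
	struct gpe_block *block;
	unsigned int i;

	for (block = head; block; block = block->next) {
		for (i = 0; i < block->register_count; i++) {
			struct gpe_register *reg = &block->register_info[i];

			reg->saved_enable = reg->enable;
			reg->enable = reg->wake_enable;
		}
	}
}

/* Mirrors acpi_hw_enable_non_wakeup_gpes: blast the snapshots back in. */
static void enable_non_wakeup(struct gpe_block *head)
{
	struct gpe_block *block;
	unsigned int i;

	for (block = head; block; block = block->next)
		for (i = 0; i < block->register_count; i++)
			block->register_info[i].enable =
				block->register_info[i].saved_enable;
}

/* Tiny round-trip self-check over a two-register block (hypothetical values). */
static int non_wakeup_roundtrip_ok(void)
{
	struct gpe_register regs[2] = { { 0xff, 0, 0x01 }, { 0x0f, 0, 0x00 } };
	struct gpe_block block = { regs, 2, NULL };

	disable_non_wakeup(&block);
	if (regs[0].enable != 0x01 || regs[1].enable != 0x00)
		return 0;
	enable_non_wakeup(&block);
	return regs[0].enable == 0xff && regs[1].enable == 0x0f;
}
```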
diff -Nru a/drivers/acpi/hardware/hwregs.c b/drivers/acpi/hardware/hwregs.c
--- a/drivers/acpi/hardware/hwregs.c Tue Mar 4 19:30:13 2003
+++ b/drivers/acpi/hardware/hwregs.c Tue Mar 4 19:30:13 2003
@@ -67,8 +67,8 @@
acpi_hw_clear_acpi_status (void)
{
acpi_native_uint i;
- acpi_native_uint gpe_block;
acpi_status status;
+ struct acpi_gpe_block_info *gpe_block;
ACPI_FUNCTION_TRACE ("hw_clear_acpi_status");
@@ -100,16 +100,19 @@
}
}
- /* Clear the GPE Bits */
+ /* Clear the GPE Bits in all GPE registers in all GPE blocks */
- for (gpe_block = 0; gpe_block < ACPI_MAX_GPE_BLOCKS; gpe_block++) {
- for (i = 0; i < acpi_gbl_gpe_block_info[gpe_block].register_count; i++) {
+ gpe_block = acpi_gbl_gpe_block_list_head;
+ while (gpe_block) {
+ for (i = 0; i < gpe_block->register_count; i++) {
status = acpi_hw_low_level_write (8, 0xFF,
- acpi_gbl_gpe_block_info[gpe_block].block_address, (u32) i);
+ &gpe_block->register_info[i].status_address, (u32) i);
if (ACPI_FAILURE (status)) {
goto unlock_and_exit;
}
}
+
+ gpe_block = gpe_block->next;
}
unlock_and_exit:
@@ -370,7 +373,7 @@
/*
* Status Registers are different from the rest. Clear by
- * writing 1, writing 0 has no effect. So, the only relevent
+ * writing 1, writing 0 has no effect. So, the only relevant
* information is the single bit we're interested in, all others should
* be written as 0 so they will be left unchanged
*/
diff -Nru a/drivers/acpi/hardware/hwsleep.c b/drivers/acpi/hardware/hwsleep.c
--- a/drivers/acpi/hardware/hwsleep.c Tue Mar 4 19:30:04 2003
+++ b/drivers/acpi/hardware/hwsleep.c Tue Mar 4 19:30:04 2003
@@ -250,7 +250,7 @@
/* Get current value of PM1A control */
- status = acpi_hw_register_read (ACPI_MTX_LOCK, ACPI_REGISTER_PM1_CONTROL, &PM1Acontrol);
+ status = acpi_hw_register_read (ACPI_MTX_DO_NOT_LOCK, ACPI_REGISTER_PM1_CONTROL, &PM1Acontrol);
if (ACPI_FAILURE (status)) {
return_ACPI_STATUS (status);
}
@@ -268,12 +268,12 @@
/* Write #1: fill in SLP_TYP data */
- status = acpi_hw_register_write (ACPI_MTX_LOCK, ACPI_REGISTER_PM1A_CONTROL, PM1Acontrol);
+ status = acpi_hw_register_write (ACPI_MTX_DO_NOT_LOCK, ACPI_REGISTER_PM1A_CONTROL, PM1Acontrol);
if (ACPI_FAILURE (status)) {
return_ACPI_STATUS (status);
}
- status = acpi_hw_register_write (ACPI_MTX_LOCK, ACPI_REGISTER_PM1B_CONTROL, PM1Bcontrol);
+ status = acpi_hw_register_write (ACPI_MTX_DO_NOT_LOCK, ACPI_REGISTER_PM1B_CONTROL, PM1Bcontrol);
if (ACPI_FAILURE (status)) {
return_ACPI_STATUS (status);
}
@@ -287,12 +287,12 @@
ACPI_FLUSH_CPU_CACHE ();
- status = acpi_hw_register_write (ACPI_MTX_LOCK, ACPI_REGISTER_PM1A_CONTROL, PM1Acontrol);
+ status = acpi_hw_register_write (ACPI_MTX_DO_NOT_LOCK, ACPI_REGISTER_PM1A_CONTROL, PM1Acontrol);
if (ACPI_FAILURE (status)) {
return_ACPI_STATUS (status);
}
- status = acpi_hw_register_write (ACPI_MTX_LOCK, ACPI_REGISTER_PM1B_CONTROL, PM1Bcontrol);
+ status = acpi_hw_register_write (ACPI_MTX_DO_NOT_LOCK, ACPI_REGISTER_PM1B_CONTROL, PM1Bcontrol);
if (ACPI_FAILURE (status)) {
return_ACPI_STATUS (status);
}
@@ -308,7 +308,7 @@
*/
acpi_os_stall (10000000);
- status = acpi_hw_register_write (ACPI_MTX_LOCK, ACPI_REGISTER_PM1_CONTROL,
+ status = acpi_hw_register_write (ACPI_MTX_DO_NOT_LOCK, ACPI_REGISTER_PM1_CONTROL,
sleep_enable_reg_info->access_bit_mask);
if (ACPI_FAILURE (status)) {
return_ACPI_STATUS (status);
@@ -318,7 +318,7 @@
/* Wait until we enter sleep state */
do {
- status = acpi_get_register (ACPI_BITREG_WAKE_STATUS, &in_value, ACPI_MTX_LOCK);
+ status = acpi_get_register (ACPI_BITREG_WAKE_STATUS, &in_value, ACPI_MTX_DO_NOT_LOCK);
if (ACPI_FAILURE (status)) {
return_ACPI_STATUS (status);
}
@@ -327,13 +327,58 @@
} while (!in_value);
- status = acpi_set_register (ACPI_BITREG_ARB_DISABLE, 0, ACPI_MTX_LOCK);
+ status = acpi_set_register (ACPI_BITREG_ARB_DISABLE, 0, ACPI_MTX_DO_NOT_LOCK);
if (ACPI_FAILURE (status)) {
return_ACPI_STATUS (status);
}
return_ACPI_STATUS (AE_OK);
}
+
+
+/******************************************************************************
+ *
+ * FUNCTION: acpi_enter_sleep_state_s4bios
+ *
+ * PARAMETERS: None
+ *
+ * RETURN: Status
+ *
+ * DESCRIPTION: Perform an S4 BIOS request.
+ * THIS FUNCTION MUST BE CALLED WITH INTERRUPTS DISABLED
+ *
+ ******************************************************************************/
+
+acpi_status
+acpi_enter_sleep_state_s4bios (
+ void)
+{
+ u32 in_value;
+ acpi_status status;
+
+
+ ACPI_FUNCTION_TRACE ("acpi_enter_sleep_state_s4bios");
+
+ acpi_set_register (ACPI_BITREG_WAKE_STATUS, 1, ACPI_MTX_DO_NOT_LOCK);
+ acpi_hw_clear_acpi_status();
+
+ acpi_hw_disable_non_wakeup_gpes();
+
+ ACPI_FLUSH_CPU_CACHE();
+
+ status = acpi_os_write_port (acpi_gbl_FADT->smi_cmd, (acpi_integer) acpi_gbl_FADT->S4bios_req, 8);
+
+ do {
+ acpi_os_stall(1000);
+ status = acpi_get_register (ACPI_BITREG_WAKE_STATUS, &in_value, ACPI_MTX_DO_NOT_LOCK);
+ if (ACPI_FAILURE (status)) {
+ return_ACPI_STATUS (status);
+ }
+ } while (!in_value);
+
+ return_ACPI_STATUS (AE_OK);
+}
+
/******************************************************************************
*
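The new acpi_enter_sleep_state_s4bios above ends in a stall-and-poll loop on the wake status bit. A standalone sketch of that loop shape, with a mock register that goes live after a few polls (the mock and its behavior are assumptions for illustration):

```c
#include <assert.h>

/* Mock wake-status register: reads as 1 after a few polls, standing in
 * for ACPI_BITREG_WAKE_STATUS going live once the SMI request completes. */
static int polls_remaining = 3;
static unsigned int read_wake_status(void)
{
	return (--polls_remaining <= 0) ? 1u : 0u;
}

/* Keep stalling and re-reading until the firmware reports the transition
 * done; returns how many polls it took. */
static int wait_for_wake_status(void)
{
	unsigned int value;
	int iterations = 0;

	do {
		/* acpi_os_stall(1000) would delay here in the real code */
		value = read_wake_status();
		iterations++;
	} while (!value);
	return iterations;
}
```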
diff -Nru a/drivers/acpi/hardware/hwtimer.c b/drivers/acpi/hardware/hwtimer.c
--- a/drivers/acpi/hardware/hwtimer.c Tue Mar 4 19:30:04 2003
+++ b/drivers/acpi/hardware/hwtimer.c Tue Mar 4 19:30:04 2003
@@ -133,7 +133,7 @@
* transitions (unlike many CPU timestamp counters) -- making it
* a versatile and accurate timer.
*
- * Note that this function accomodates only a single timer
+ * Note that this function accommodates only a single timer
* rollover. Thus for 24-bit timers, this function should only
* be used for calculating durations less than ~4.6 seconds
* (~20 minutes for 32-bit timers) -- calculations below
diff -Nru a/drivers/acpi/osl.c b/drivers/acpi/osl.c
--- a/drivers/acpi/osl.c Tue Mar 4 19:30:12 2003
+++ b/drivers/acpi/osl.c Tue Mar 4 19:30:12 2003
@@ -514,10 +514,12 @@
/* TODO: Change code to take advantage of driver model more */
void
-acpi_os_derive_pci_id (
+acpi_os_derive_pci_id_2 (
acpi_handle rhandle, /* upper bound */
acpi_handle chandle, /* current node */
- struct acpi_pci_id **id)
+ struct acpi_pci_id **id,
+ int *is_bridge,
+ u8 *bus_number)
{
acpi_handle handle;
struct acpi_pci_id *pci_id = *id;
@@ -528,7 +530,7 @@
acpi_get_parent(chandle, &handle);
if (handle != rhandle) {
- acpi_os_derive_pci_id(rhandle, handle, &pci_id);
+ acpi_os_derive_pci_id_2(rhandle, handle, &pci_id, is_bridge, bus_number);
status = acpi_get_type(handle, &type);
if ( (ACPI_FAILURE(status)) || (type != ACPI_TYPE_DEVICE) )
@@ -539,15 +541,40 @@
pci_id->device = ACPI_HIWORD (ACPI_LODWORD (temp));
pci_id->function = ACPI_LOWORD (ACPI_LODWORD (temp));
+ if (*is_bridge)
+ pci_id->bus = *bus_number;
+
/* any nicer way to get bus number of bridge ? */
status = acpi_os_read_pci_configuration(pci_id, 0x0e, &tu8, 8);
- if (ACPI_SUCCESS(status) && (tu8 & 0x7f) == 1) {
+ if (ACPI_SUCCESS(status) &&
+ ((tu8 & 0x7f) == 1 || (tu8 & 0x7f) == 2)) {
+ status = acpi_os_read_pci_configuration(pci_id, 0x18, &tu8, 8);
+ if (!ACPI_SUCCESS(status)) {
+ /* Certainly broken... FIX ME */
+ return;
+ }
+ *is_bridge = 1;
+ pci_id->bus = tu8;
status = acpi_os_read_pci_configuration(pci_id, 0x19, &tu8, 8);
- if (ACPI_SUCCESS(status))
- pci_id->bus = tu8;
- }
+ if (ACPI_SUCCESS(status)) {
+ *bus_number = tu8;
+ }
+ } else
+ *is_bridge = 0;
}
}
+}
+
+void
+acpi_os_derive_pci_id (
+ acpi_handle rhandle, /* upper bound */
+ acpi_handle chandle, /* current node */
+ struct acpi_pci_id **id)
+{
+ int is_bridge = 1;
+ u8 bus_number = (*id)->bus;
+
+ acpi_os_derive_pci_id_2(rhandle, chandle, id, &is_bridge, &bus_number);
}
#else /*!CONFIG_ACPI_PCI*/
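The osl.c change above threads bridge state through the recursion so devices behind a PCI-PCI or CardBus bridge get the right bus number. A minimal sketch of the bridge detection against a mock config space, using the standard type-1 header offsets (0x0e header type, 0x19 secondary bus); the helper names are hypothetical:

```c
#include <assert.h>

/* Mock 256-byte config space for one function. */
static unsigned char cfg[256];

static unsigned char cfg_read8(unsigned int offset) { return cfg[offset]; }

/* Mirrors the check in acpi_os_derive_pci_id_2: header types 1 (PCI-PCI
 * bridge) and 2 (CardBus bridge) carry bus number registers; the secondary
 * bus at 0x19 is the bus seen by devices behind the bridge. */
static int derive_child_bus(unsigned char *child_bus)
{
	unsigned char type = cfg_read8(0x0e) & 0x7f;

	if (type != 1 && type != 2)
		return 0;		/* plain device: no bus registers */
	*child_bus = cfg_read8(0x19);
	return 1;
}

/* Small demo: a bridge with secondary bus 5, then a non-bridge header. */
static int bridge_demo(void)
{
	unsigned char bus = 0;

	cfg[0x0e] = 0x01;
	cfg[0x19] = 5;
	if (!derive_child_bus(&bus) || bus != 5)
		return 0;

	cfg[0x0e] = 0x00;
	return derive_child_bus(&bus) == 0;
}
```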
diff -Nru a/drivers/acpi/pci_link.c b/drivers/acpi/pci_link.c
--- a/drivers/acpi/pci_link.c Tue Mar 4 19:30:04 2003
+++ b/drivers/acpi/pci_link.c Tue Mar 4 19:30:04 2003
@@ -90,42 +90,25 @@
PCI Link Device Management
-------------------------------------------------------------------------- */
-static int
-acpi_pci_link_get_possible (
- struct acpi_pci_link *link)
+static acpi_status
+acpi_pci_link_check_possible (
+ struct acpi_resource *resource,
+ void *context)
{
- int result = 0;
- acpi_status status = AE_OK;
- struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL};
- struct acpi_resource *resource = NULL;
+ struct acpi_pci_link *link = (struct acpi_pci_link *) context;
int i = 0;
- ACPI_FUNCTION_TRACE("acpi_pci_link_get_possible");
-
- if (!link)
- return_VALUE(-EINVAL);
-
- status = acpi_get_possible_resources(link->handle, &buffer);
- if (ACPI_FAILURE(status) || !buffer.pointer) {
- ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Error evaluating _PRS\n"));
- result = -ENODEV;
- goto end;
- }
-
- resource = (struct acpi_resource *) buffer.pointer;
-
- /* skip past dependent function resource (if present) */
- if (resource->id == ACPI_RSTYPE_START_DPF)
- resource = ACPI_NEXT_RESOURCE(resource);
+ ACPI_FUNCTION_TRACE("acpi_pci_link_check_possible");
switch (resource->id) {
+ case ACPI_RSTYPE_START_DPF:
+ return AE_OK;
case ACPI_RSTYPE_IRQ:
{
struct acpi_resource_irq *p = &resource->data.irq;
if (!p || !p->number_of_interrupts) {
ACPI_DEBUG_PRINT((ACPI_DB_WARN, "Blank IRQ resource\n"));
- result = -ENODEV;
- goto end;
+ return AE_OK;
}
		for (i = 0; (i < p->number_of_interrupts && i < ACPI_PCI_LINK_MAX_POSSIBLE); i++) {
			if (!p->interrupts[i]) {
@@ -143,8 +126,7 @@
if (!p || !p->number_of_interrupts) {
ACPI_DEBUG_PRINT((ACPI_DB_WARN,
"Blank IRQ resource\n"));
- result = -ENODEV;
- goto end;
+ return AE_OK;
}
		for (i = 0; (i < p->number_of_interrupts && i < ACPI_PCI_LINK_MAX_POSSIBLE); i++) {
			if (!p->interrupts[i]) {
@@ -159,18 +141,76 @@
default:
ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
"Resource is not an IRQ entry\n"));
- result = -ENODEV;
- goto end;
- break;
+ return AE_OK;
+ }
+
+ return AE_CTRL_TERMINATE;
+}
+
+
+static int
+acpi_pci_link_get_possible (
+ struct acpi_pci_link *link)
+{
+ acpi_status status;
+
+ ACPI_FUNCTION_TRACE("acpi_pci_link_get_possible");
+
+ if (!link)
+ return_VALUE(-EINVAL);
+
+ status = acpi_walk_resources(link->handle, METHOD_NAME__PRS,
+ acpi_pci_link_check_possible, link);
+ if (ACPI_FAILURE(status)) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Error evaluating _PRS\n"));
+ return_VALUE(-ENODEV);
}
-
+
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
"Found %d possible IRQs\n", link->irq.possible_count));
-end:
- acpi_os_free(buffer.pointer);
+ return_VALUE(0);
+}
+
- return_VALUE(result);
+static acpi_status
+acpi_pci_link_check_current (
+ struct acpi_resource *resource,
+ void *context)
+{
+ int *irq = (int *) context;
+
+ ACPI_FUNCTION_TRACE("acpi_pci_link_check_current");
+
+ switch (resource->id) {
+ case ACPI_RSTYPE_IRQ:
+ {
+ struct acpi_resource_irq *p = &resource->data.irq;
+ if (!p || !p->number_of_interrupts) {
+ ACPI_DEBUG_PRINT((ACPI_DB_WARN,
+ "Blank IRQ resource\n"));
+ return AE_OK;
+ }
+ *irq = p->interrupts[0];
+ break;
+ }
+ case ACPI_RSTYPE_EXT_IRQ:
+ {
+ struct acpi_resource_ext_irq *p = &resource->data.extended_irq;
+ if (!p || !p->number_of_interrupts) {
+ ACPI_DEBUG_PRINT((ACPI_DB_WARN,
+ "Blank IRQ resource\n"));
+ return AE_OK;
+ }
+ *irq = p->interrupts[0];
+ break;
+ }
+ default:
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
+ "Resource isn't an IRQ\n"));
+ return AE_OK;
+ }
+ return AE_CTRL_TERMINATE;
}
@@ -180,8 +220,6 @@
{
int result = 0;
acpi_status status = AE_OK;
- struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL};
- struct acpi_resource *resource = NULL;
int irq = 0;
ACPI_FUNCTION_TRACE("acpi_pci_link_get_current");
@@ -206,47 +244,16 @@
* Query and parse _CRS to get the current IRQ assignment.
*/
- status = acpi_get_current_resources(link->handle, &buffer);
+ status = acpi_walk_resources(link->handle, METHOD_NAME__CRS,
+ acpi_pci_link_check_current, &irq);
if (ACPI_FAILURE(status)) {
ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Error evaluating _CRS\n"));
result = -ENODEV;
goto end;
}
- resource = (struct acpi_resource *) buffer.pointer;
-
- switch (resource->id) {
- case ACPI_RSTYPE_IRQ:
- {
- struct acpi_resource_irq *p = &resource->data.irq;
- if (!p || !p->number_of_interrupts) {
- ACPI_DEBUG_PRINT((ACPI_DB_WARN,
- "Blank IRQ resource\n"));
- result = -ENODEV;
- goto end;
- }
- irq = p->interrupts[0];
- break;
- }
- case ACPI_RSTYPE_EXT_IRQ:
- {
- struct acpi_resource_ext_irq *p = &resource->data.extended_irq;
- if (!p || !p->number_of_interrupts) {
- ACPI_DEBUG_PRINT((ACPI_DB_WARN,
- "Blank IRQ resource\n"));
- result = -ENODEV;
- goto end;
- }
- irq = p->interrupts[0];
- break;
- }
- default:
- ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Resource isn't an IRQ\n"));
- result = -ENODEV;
- goto end;
- }
if (!irq) {
- ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Invalid use of IRQ 0\n"));
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "No IRQ resource found\n"));
result = -ENODEV;
goto end;
}
@@ -263,8 +270,6 @@
ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Link at IRQ %d \n", link->irq.active));
end:
- acpi_os_free(buffer.pointer);
-
return_VALUE(result);
}
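The pci_link.c conversion above replaces manual buffer parsing with callbacks driven by the new acpi_walk_resources: the callback returns a "keep going" status for uninteresting entries and a terminate status once it has what it needs. A self-contained sketch of that control flow, using simplified stand-in types:

```c
#include <assert.h>
#include <stddef.h>

enum walk_status { WALK_OK, WALK_TERMINATE };

/* Stand-in resource: id 1 plays the role of ACPI_RSTYPE_IRQ. */
struct resource { int id; int irq; };

/* Generic walk in the style of acpi_walk_resources: invoke the callback
 * per resource; WALK_OK continues, WALK_TERMINATE stops the walk early. */
static void walk(struct resource *res, size_t n,
		 enum walk_status (*fn)(struct resource *, void *), void *ctx)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (fn(&res[i], ctx) == WALK_TERMINATE)
			break;
}

/* Callback mirroring acpi_pci_link_check_current: skip non-IRQ entries,
 * record the first IRQ found, then terminate. */
static enum walk_status check_current(struct resource *res, void *ctx)
{
	if (res->id != 1)
		return WALK_OK;
	*(int *) ctx = res->irq;
	return WALK_TERMINATE;
}

/* Demo: the first IRQ entry (9) wins; the later one (11) is never visited. */
static int first_irq_demo(void)
{
	struct resource list[3] = { { 0, 0 }, { 1, 9 }, { 1, 11 } };
	int irq = 0;

	walk(list, 3, check_current, &irq);
	return irq;
}
```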
diff -Nru a/drivers/acpi/power.c b/drivers/acpi/power.c
--- a/drivers/acpi/power.c Tue Mar 4 19:30:09 2003
+++ b/drivers/acpi/power.c Tue Mar 4 19:30:09 2003
@@ -351,7 +351,7 @@
/*
* First we reference all power resources required in the target list
- * (e.g. so the device doesn't loose power while transitioning).
+ * (e.g. so the device doesn't lose power while transitioning).
*/
	for (i = 0; i < tl->count; i++) {
result = acpi_power_on(tl->handles[i]);
diff -Nru a/drivers/acpi/processor.c b/drivers/acpi/processor.c
--- a/drivers/acpi/processor.c Tue Mar 4 19:30:11 2003
+++ b/drivers/acpi/processor.c Tue Mar 4 19:30:11 2003
@@ -1560,7 +1560,7 @@
acpi_status status = 0;
union acpi_object object = {0};
struct acpi_buffer buffer = {sizeof(union acpi_object), &object};
- static int cpu_count = 0;
+ static int cpu_index = 0;
ACPI_FUNCTION_TRACE("acpi_processor_get_info");
@@ -1570,6 +1570,13 @@
if (num_online_cpus() > 1)
errata.smp = TRUE;
+ /*
+ * Extra Processor objects may be enumerated on MP systems with
+ * less than the max # of CPUs. They should be ignored.
+ */
+ if ((cpu_index + 1) > num_online_cpus())
+ return_VALUE(-ENODEV);
+
acpi_processor_errata(pr);
/*
@@ -1601,7 +1608,7 @@
* TBD: Synch processor ID (via LAPIC/LSAPIC structures) on SMP.
* >>> 'acpi_get_processor_id(acpi_id, &id)' in arch/xxx/acpi.c
*/
- pr->id = cpu_count++;
+ pr->id = cpu_index++;
pr->acpi_id = object.processor.proc_id;
ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Processor [%d:%d]\n", pr->id,
@@ -1609,21 +1616,17 @@
if (!object.processor.pblk_address)
ACPI_DEBUG_PRINT((ACPI_DB_INFO, "No PBLK (NULL address)\n"));
- else if (object.processor.pblk_length < 4)
+ else if (object.processor.pblk_length != 6)
ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Invalid PBLK length [%d]\n",
object.processor.pblk_length));
else {
pr->throttling.address = object.processor.pblk_address;
pr->throttling.duty_offset = acpi_fadt.duty_offset;
pr->throttling.duty_width = acpi_fadt.duty_width;
-
- if (object.processor.pblk_length >= 5)
- pr->power.states[ACPI_STATE_C2].address =
- object.processor.pblk_address + 4;
-
- if (object.processor.pblk_length >= 6)
- pr->power.states[ACPI_STATE_C3].address =
- object.processor.pblk_address + 5;
+ pr->power.states[ACPI_STATE_C2].address =
+ object.processor.pblk_address + 4;
+ pr->power.states[ACPI_STATE_C3].address =
+ object.processor.pblk_address + 5;
}
acpi_processor_get_power_info(pr);
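The processor.c hunk above tightens the PBLK length check from "at least 4" to "exactly 6", matching the spec layout: P_CNT occupies the first 4 bytes, with P_LVL2 at offset 4 and P_LVL3 at offset 5, so with a full block both C-state addresses are always derivable. A small sketch of that address arithmetic (function name hypothetical):

```c
#include <assert.h>

/* Derive the C2/C3 register addresses from a Processor Block, mirroring
 * the new "pblk_length != 6" validation: P_LVL2 sits at PBLK+4 and
 * P_LVL3 at PBLK+5; anything but a 6-byte block is rejected. */
static int pblk_addresses(unsigned int pblk_address, unsigned int pblk_length,
			  unsigned int *c2, unsigned int *c3)
{
	if (pblk_length != 6)
		return -1;		/* invalid PBLK */
	*c2 = pblk_address + 4;		/* P_LVL2 register */
	*c3 = pblk_address + 5;		/* P_LVL3 register */
	return 0;
}

/* Demo with a hypothetical PBLK base of 0x1000. */
static int pblk_demo(void)
{
	unsigned int c2 = 0, c3 = 0;

	if (pblk_addresses(0x1000, 4, &c2, &c3) != -1)
		return 0;		/* short PBLK must be rejected */
	if (pblk_addresses(0x1000, 6, &c2, &c3) != 0)
		return 0;
	return c2 == 0x1004 && c3 == 0x1005;
}
```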
diff -Nru a/drivers/acpi/resources/rsmemory.c b/drivers/acpi/resources/rsmemory.c
--- a/drivers/acpi/resources/rsmemory.c Tue Mar 4 19:30:13 2003
+++ b/drivers/acpi/resources/rsmemory.c Tue Mar 4 19:30:13 2003
@@ -278,7 +278,7 @@
/*
* Point to the place in the output buffer where the data portion will
* begin.
- * 1. Set the RESOURCE_DATA * Data to point to it's own address, then
+ * 1. Set the RESOURCE_DATA * Data to point to its own address, then
* 2. Set the pointer to the next address.
*
* NOTE: output_struct->Data is cast to u8, otherwise, this addition adds
diff -Nru a/drivers/acpi/resources/rsutils.c b/drivers/acpi/resources/rsutils.c
--- a/drivers/acpi/resources/rsutils.c Tue Mar 4 19:30:04 2003
+++ b/drivers/acpi/resources/rsutils.c Tue Mar 4 19:30:04 2003
@@ -214,6 +214,60 @@
/*******************************************************************************
*
+ * FUNCTION: acpi_rs_get_method_data
+ *
+ * PARAMETERS: Handle - a handle to the containing object
+ * ret_buffer - a pointer to a buffer structure for the
+ * results
+ *
+ * RETURN: Status
+ *
+ * DESCRIPTION: This function is called to get the _CRS or _PRS value of an
+ * object contained in an object specified by the handle passed in
+ *
+ * If the function fails, an appropriate status will be returned
+ * and the contents of the caller's buffer are undefined.
+ *
+ ******************************************************************************/
+
+acpi_status
+acpi_rs_get_method_data (
+ acpi_handle handle,
+ char *path,
+ struct acpi_buffer *ret_buffer)
+{
+ union acpi_operand_object *obj_desc;
+ acpi_status status;
+
+
+ ACPI_FUNCTION_TRACE ("rs_get_method_data");
+
+
+ /* Parameters guaranteed valid by caller */
+
+ /*
+ * Execute the method, no parameters
+ */
+ status = acpi_ut_evaluate_object (handle, path, ACPI_BTYPE_BUFFER, &obj_desc);
+ if (ACPI_FAILURE (status)) {
+ return_ACPI_STATUS (status);
+ }
+
+ /*
+ * Make the call to create a resource linked list from the
+ * byte stream buffer that comes back from the method
+ * execution.
+ */
+ status = acpi_rs_create_resource_list (obj_desc, ret_buffer);
+
+ /* On exit, we must delete the object returned by evaluate_object */
+
+ acpi_ut_remove_reference (obj_desc);
+ return_ACPI_STATUS (status);
+}
+
+/*******************************************************************************
+ *
* FUNCTION: acpi_rs_set_srs_method_data
*
* PARAMETERS: Handle - a handle to the containing object
diff -Nru a/drivers/acpi/resources/rsxface.c b/drivers/acpi/resources/rsxface.c
--- a/drivers/acpi/resources/rsxface.c Tue Mar 4 19:30:11 2003
+++ b/drivers/acpi/resources/rsxface.c Tue Mar 4 19:30:11 2003
@@ -212,6 +212,90 @@
/*******************************************************************************
*
+ * FUNCTION: acpi_walk_resources
+ *
+ * PARAMETERS: device_handle - a handle to the device object for the
+ * device we are querying
+ * Path - method name of the resources we want
+ * (METHOD_NAME__CRS or METHOD_NAME__PRS)
+ * user_function - called for each resource
+ * Context - passed to user_function
+ *
+ * RETURN: Status
+ *
+ * DESCRIPTION: Retrieves the current or possible resource list for the
+ * specified device. The user_function is called once for
+ * each resource in the list.
+ *
+ ******************************************************************************/
+
+acpi_status
+acpi_walk_resources (
+ acpi_handle device_handle,
+ char *path,
+ ACPI_WALK_RESOURCE_CALLBACK user_function,
+ void *context)
+{
+ acpi_status status;
+ struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL};
+ struct acpi_resource *resource;
+
+ ACPI_FUNCTION_TRACE ("acpi_walk_resources");
+
+
+ if (!device_handle ||
+ (ACPI_STRNCMP (path, METHOD_NAME__CRS, sizeof (METHOD_NAME__CRS)) &&
+ ACPI_STRNCMP (path, METHOD_NAME__PRS, sizeof (METHOD_NAME__PRS)))) {
+ return_ACPI_STATUS (AE_BAD_PARAMETER);
+ }
+
+ status = acpi_rs_get_method_data (device_handle, path, &buffer);
+ if (ACPI_FAILURE (status)) {
+ return_ACPI_STATUS (status);
+ }
+
+ resource = (struct acpi_resource *) buffer.pointer;
+ for (;;) {
+ if (!resource || resource->id == ACPI_RSTYPE_END_TAG) {
+ break;
+ }
+
+ status = user_function (resource, context);
+
+ switch (status) {
+ case AE_OK:
+ case AE_CTRL_DEPTH:
+
+ /* Just keep going */
+ status = AE_OK;
+ break;
+
+ case AE_CTRL_TERMINATE:
+
+ /* Exit now, with OK status */
+
+ status = AE_OK;
+ goto cleanup;
+
+ default:
+
+ /* All others are valid exceptions */
+
+ goto cleanup;
+ }
+
+ resource = ACPI_NEXT_RESOURCE (resource);
+ }
+
+cleanup:
+
+ acpi_os_free (buffer.pointer);
+
+ return_ACPI_STATUS (status);
+}
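The walk above dispatches on the callback's return value: AE_CTRL_DEPTH is folded back into "keep going", AE_CTRL_TERMINATE ends the walk successfully, and anything else aborts with that status. A minimal user-space sketch of the same control flow, with stand-in status codes and a stand-in resource list rather than the real ACPICA types:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for the ACPI status codes used by the walk. */
enum status { OK, CTRL_DEPTH, CTRL_TERMINATE, ERROR };

struct res { int id; };               /* id == 0 marks the end tag */
typedef enum status (*walk_cb)(struct res *r, void *ctx);

/* Walk until the end tag; TERMINATE stops early but still reports OK,
 * DEPTH is treated as "keep going", anything else aborts the walk. */
static enum status walk_resources(struct res *list, walk_cb fn, void *ctx)
{
    enum status st = OK;
    for (struct res *r = list; r && r->id != 0; r++) {
        st = fn(r, ctx);
        switch (st) {
        case OK:
        case CTRL_DEPTH:
            st = OK;                  /* just keep going */
            break;
        case CTRL_TERMINATE:
            return OK;                /* exit now, with OK status */
        default:
            return st;                /* real error: propagate */
        }
    }
    return st;
}

/* Example callback: visit two resources, then terminate the walk. */
static enum status count_two(struct res *r, void *ctx)
{
    int *n = ctx;
    (void)r;
    return ++(*n) == 2 ? CTRL_TERMINATE : OK;
}
```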
+
+/*******************************************************************************
+ *
* FUNCTION: acpi_set_current_resources
*
* PARAMETERS: device_handle - a handle to the device object for the
@@ -251,4 +335,65 @@
status = acpi_rs_set_srs_method_data (device_handle, in_buffer);
return_ACPI_STATUS (status);
+}
+
+#define COPY_FIELD(out, in, field) out->field = in->field
+#define COPY_ADDRESS(out, in) \
+ COPY_FIELD(out, in, resource_type); \
+ COPY_FIELD(out, in, producer_consumer); \
+ COPY_FIELD(out, in, decode); \
+ COPY_FIELD(out, in, min_address_fixed); \
+ COPY_FIELD(out, in, max_address_fixed); \
+ COPY_FIELD(out, in, attribute); \
+ COPY_FIELD(out, in, granularity); \
+ COPY_FIELD(out, in, min_address_range); \
+ COPY_FIELD(out, in, max_address_range); \
+ COPY_FIELD(out, in, address_translation_offset); \
+ COPY_FIELD(out, in, address_length); \
+ COPY_FIELD(out, in, resource_source);
+
+/*******************************************************************************
+*
+* FUNCTION: acpi_resource_to_address64
+*
+* PARAMETERS: resource - Pointer to a resource
+* out - Pointer to the user's return
+* buffer (a struct
+* acpi_resource_address64)
+*
+* RETURN: Status
+*
+* DESCRIPTION: If the resource is an address16, address32, or address64,
+* copy it to the address64 return buffer. This saves the
+* caller from having to duplicate code for different-sized
+* addresses.
+*
+******************************************************************************/
+
+acpi_status
+acpi_resource_to_address64 (
+ struct acpi_resource *resource,
+ struct acpi_resource_address64 *out)
+{
+ struct acpi_resource_address16 *address16;
+ struct acpi_resource_address32 *address32;
+ struct acpi_resource_address64 *address64;
+
+ switch (resource->id) {
+ case ACPI_RSTYPE_ADDRESS16:
+ address16 = (struct acpi_resource_address16 *) &resource->data;
+ COPY_ADDRESS(out, address16);
+ break;
+ case ACPI_RSTYPE_ADDRESS32:
+ address32 = (struct acpi_resource_address32 *) &resource->data;
+ COPY_ADDRESS(out, address32);
+ break;
+ case ACPI_RSTYPE_ADDRESS64:
+ address64 = (struct acpi_resource_address64 *) &resource->data;
+ COPY_ADDRESS(out, address64);
+ break;
+ default:
+ return (AE_BAD_PARAMETER);
+ }
+ return (AE_OK);
}
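The COPY_ADDRESS macro works because the address16, address32, and address64 variants share the same field names, so one field-by-field copy compiles against all three and narrow integers widen implicitly. A sketch of the trick with two hypothetical struct layouts (the do/while wrapper here avoids the dangling-semicolon hazard of the patch's bare statement list):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical narrow and wide layouts sharing field names, so one
 * field-copy macro serves both -- the property COPY_ADDRESS relies on. */
struct addr16 { uint16_t granularity, min_address_range, address_length; };
struct addr64 { uint64_t granularity, min_address_range, address_length; };

#define COPY_FIELD(out, in, field) ((out)->field = (in)->field)
#define COPY_ADDRESS(out, in) do {          \
    COPY_FIELD(out, in, granularity);       \
    COPY_FIELD(out, in, min_address_range); \
    COPY_FIELD(out, in, address_length);    \
} while (0)

static void to_addr64(const struct addr16 *in, struct addr64 *out)
{
    COPY_ADDRESS(out, in);   /* each 16-bit field widens implicitly */
}
```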
diff -Nru a/drivers/acpi/sleep/main.c b/drivers/acpi/sleep/main.c
--- a/drivers/acpi/sleep/main.c Tue Mar 4 19:30:03 2003
+++ b/drivers/acpi/sleep/main.c Tue Mar 4 19:30:03 2003
@@ -183,14 +183,21 @@
status = acpi_enter_sleep_state(state);
break;
- case ACPI_STATE_S2:
#ifdef CONFIG_SOFTWARE_SUSPEND
+ case ACPI_STATE_S2:
case ACPI_STATE_S3:
do_suspend_lowlevel(0);
+ break;
#endif
+ case ACPI_STATE_S4:
+ do_suspend_lowlevel_s4bios(0);
+ break;
+ default:
+ printk(KERN_WARNING PREFIX "don't know how to handle state %d.\n", state);
break;
}
local_irq_restore(flags);
+ printk(KERN_CRIT "Back to C!\n");
return status;
}
@@ -211,21 +218,31 @@
if (state < ACPI_STATE_S1 || state > ACPI_STATE_S5)
return AE_ERROR;
+ /* Since we handle S4OS via a different path (swsusp), give up if no s4bios. */
+ if (state == ACPI_STATE_S4 && !acpi_gbl_FACS->S4bios_f)
+ return AE_ERROR;
+
+ /*
+ * TBD: S1 can be done without device_suspend. Make a CONFIG_XX
+ * to handle however when S1 failed without device_suspend.
+ */
freeze_processes(); /* device_suspend needs processes to be stopped */
/* do we have a wakeup address for S2 and S3? */
- if (state == ACPI_STATE_S2 || state == ACPI_STATE_S3) {
+ /* Here we support only S4BIOS, thus we set the wakeup address. */
+ /* S4OS is for now only supported via swsusp. */
+ if (state == ACPI_STATE_S2 || state == ACPI_STATE_S3 || state == ACPI_STATE_S4) {
if (!acpi_wakeup_address)
return AE_ERROR;
acpi_set_firmware_waking_vector((acpi_physical_address) acpi_wakeup_address);
}
- acpi_enter_sleep_state_prep(state);
-
status = acpi_system_save_state(state);
if (!ACPI_SUCCESS(status))
return status;
+ acpi_enter_sleep_state_prep(state);
+
/* disable interrupts and flush caches */
ACPI_DISABLE_IRQS();
ACPI_FLUSH_CPU_CACHE();
@@ -237,8 +254,8 @@
* mode. So, we run these unconditionally to make sure we have a usable system
* no matter what.
*/
- acpi_system_restore_state(state);
acpi_leave_sleep_state(state);
+ acpi_system_restore_state(state);
/* make sure interrupts are enabled */
ACPI_ENABLE_IRQS();
@@ -267,6 +284,10 @@
if (ACPI_SUCCESS(status)) {
sleep_states[i] = 1;
printk(" S%d", i);
+ }
+ if (i == ACPI_STATE_S4 && acpi_gbl_FACS->S4bios_f) {
+ sleep_states[i] = 1;
+ printk(" S4bios");
}
}
printk(")\n");
diff -Nru a/drivers/acpi/sleep/proc.c b/drivers/acpi/sleep/proc.c
--- a/drivers/acpi/sleep/proc.c Tue Mar 4 19:30:11 2003
+++ b/drivers/acpi/sleep/proc.c Tue Mar 4 19:30:11 2003
@@ -27,8 +27,11 @@
ACPI_FUNCTION_TRACE("acpi_system_sleep_seq_show");
for (i = 0; i <= ACPI_STATE_S5; i++) {
- if (sleep_states[i])
+ if (sleep_states[i]) {
seq_printf(seq,"S%d ", i);
+ if (i == ACPI_STATE_S4 && acpi_gbl_FACS->S4bios_f)
+ seq_printf(seq, "S4bios ");
+ }
}
seq_puts(seq, "\n");
diff -Nru a/drivers/acpi/tables/tbconvrt.c b/drivers/acpi/tables/tbconvrt.c
--- a/drivers/acpi/tables/tbconvrt.c Tue Mar 4 19:30:07 2003
+++ b/drivers/acpi/tables/tbconvrt.c Tue Mar 4 19:30:07 2003
@@ -239,9 +239,8 @@
ASL_BUILD_GAS_FROM_V1_ENTRY (local_fadt->xpm1b_cnt_blk, local_fadt->pm1_cnt_len, local_fadt->V1_pm1b_cnt_blk);
ASL_BUILD_GAS_FROM_V1_ENTRY (local_fadt->xpm2_cnt_blk, local_fadt->pm2_cnt_len, local_fadt->V1_pm2_cnt_blk);
ASL_BUILD_GAS_FROM_V1_ENTRY (local_fadt->xpm_tmr_blk, local_fadt->pm_tm_len, local_fadt->V1_pm_tmr_blk);
- ASL_BUILD_GAS_FROM_V1_ENTRY (local_fadt->xgpe0_blk, local_fadt->gpe0_blk_len, local_fadt->V1_gpe0_blk);
- ASL_BUILD_GAS_FROM_V1_ENTRY (local_fadt->xgpe1_blk, local_fadt->gpe1_blk_len, local_fadt->V1_gpe1_blk);
-
+ ASL_BUILD_GAS_FROM_V1_ENTRY (local_fadt->xgpe0_blk, 0, local_fadt->V1_gpe0_blk);
+ ASL_BUILD_GAS_FROM_V1_ENTRY (local_fadt->xgpe1_blk, 0, local_fadt->V1_gpe1_blk);
}
@@ -314,14 +313,15 @@
if (!(local_fadt->xgpe0_blk.address)) {
ASL_BUILD_GAS_FROM_V1_ENTRY (local_fadt->xgpe0_blk,
- local_fadt->gpe0_blk_len, local_fadt->V1_gpe0_blk);
+ 0, local_fadt->V1_gpe0_blk);
}
if (!(local_fadt->xgpe1_blk.address)) {
ASL_BUILD_GAS_FROM_V1_ENTRY (local_fadt->xgpe1_blk,
- local_fadt->gpe1_blk_len, local_fadt->V1_gpe1_blk);
+ 0, local_fadt->V1_gpe1_blk);
}
}
+
/*******************************************************************************
*
diff -Nru a/drivers/acpi/tables.c b/drivers/acpi/tables.c
--- a/drivers/acpi/tables.c Tue Mar 4 19:30:07 2003
+++ b/drivers/acpi/tables.c Tue Mar 4 19:30:07 2003
@@ -379,6 +379,7 @@
sdt.pa = ((struct acpi20_table_rsdp*)rsdp)->xsdt_address;
+ /* map in just the header */
header = (struct acpi_table_header *)
__acpi_map_table(sdt.pa, sizeof(struct acpi_table_header));
@@ -387,6 +388,15 @@
return -ENODEV;
}
+ /* remap in the entire table before processing */
+ mapped_xsdt = (struct acpi_table_xsdt *)
+ __acpi_map_table(sdt.pa, header->length);
+ if (!mapped_xsdt) {
+ printk(KERN_WARNING PREFIX "Unable to map XSDT\n");
+ return -ENODEV;
+ }
+ header = &mapped_xsdt->header;
+
if (strncmp(header->signature, "XSDT", 4)) {
printk(KERN_WARNING PREFIX "XSDT signature incorrect\n");
return -ENODEV;
@@ -404,15 +414,6 @@
sdt.count = ACPI_MAX_TABLES;
}
- mapped_xsdt = (struct acpi_table_xsdt *)
- __acpi_map_table(sdt.pa, header->length);
- if (!mapped_xsdt) {
- printk(KERN_WARNING PREFIX "Unable to map XSDT\n");
- return -ENODEV;
- }
-
- header = &mapped_xsdt->header;
-
for (i = 0; i < sdt.count; i++)
sdt.entry[i].pa = (unsigned long) mapped_xsdt->entry[i];
}
@@ -425,6 +426,7 @@
sdt.pa = rsdp->rsdt_address;
+ /* map in just the header */
header = (struct acpi_table_header *)
__acpi_map_table(sdt.pa, sizeof(struct acpi_table_header));
if (!header) {
@@ -432,6 +434,15 @@
return -ENODEV;
}
+ /* remap in the entire table before processing */
+ mapped_rsdt = (struct acpi_table_rsdt *)
+ __acpi_map_table(sdt.pa, header->length);
+ if (!mapped_rsdt) {
+ printk(KERN_WARNING PREFIX "Unable to map RSDT\n");
+ return -ENODEV;
+ }
+ header = &mapped_rsdt->header;
+
if (strncmp(header->signature, "RSDT", 4)) {
printk(KERN_WARNING PREFIX "RSDT signature incorrect\n");
return -ENODEV;
@@ -449,15 +460,6 @@
sdt.count = ACPI_MAX_TABLES;
}
- mapped_rsdt = (struct acpi_table_rsdt *)
- __acpi_map_table(sdt.pa, header->length);
- if (!mapped_rsdt) {
- printk(KERN_WARNING PREFIX "Unable to map RSDT\n");
- return -ENODEV;
- }
-
- header = &mapped_rsdt->header;
-
for (i = 0; i < sdt.count; i++)
sdt.entry[i].pa = (unsigned long) mapped_rsdt->entry[i];
}
@@ -471,12 +473,20 @@
for (i = 0; i < sdt.count; i++) {
+ /* map in just the header */
header = (struct acpi_table_header *)
__acpi_map_table(sdt.entry[i].pa,
sizeof(struct acpi_table_header));
if (!header)
continue;
+ /* remap in the entire table before processing */
+ header = (struct acpi_table_header *)
+ __acpi_map_table(sdt.entry[i].pa,
+ header->length);
+ if (!header)
+ continue;
+
acpi_table_print(header, sdt.entry[i].pa);
if (acpi_table_compute_checksum(header, header->length)) {
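The tables.c change establishes a two-phase mapping: map only the fixed-size header first to learn the table's length, then remap the full table before reading anything past the header. A user-space sketch of that pattern, where "physical memory" is a byte array and the mapper is a bounds-checked pointer offset rather than the real __acpi_map_table:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

struct table_header { char signature[4]; uint32_t length; };

/* Stand-in for __acpi_map_table(): "mapping" here is just pointer
 * arithmetic into a byte array, with a bounds check. */
static uint8_t phys_mem[64];
static void *map_table(size_t pa, size_t len)
{
    return (pa + len <= sizeof(phys_mem)) ? phys_mem + pa : NULL;
}

/* Two-phase map: header first to learn the size, then the full table. */
static struct table_header *map_full_table(size_t pa)
{
    struct table_header *h = map_table(pa, sizeof(*h));
    if (!h)
        return NULL;
    return map_table(pa, h->length);  /* remap before reading the body */
}
```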
diff -Nru a/drivers/acpi/utilities/utcopy.c b/drivers/acpi/utilities/utcopy.c
--- a/drivers/acpi/utilities/utcopy.c Tue Mar 4 19:30:08 2003
+++ b/drivers/acpi/utilities/utcopy.c Tue Mar 4 19:30:08 2003
@@ -645,11 +645,11 @@
/*
* Allocate and copy the actual buffer if and only if:
- * 1) There is a valid buffer (length > 0)
+ * 1) There is a valid buffer pointer
* 2) The buffer is not static (not in an ACPI table) (in this case,
* the actual pointer was already copied above)
*/
- if ((source_desc->buffer.length) &&
+ if ((source_desc->buffer.pointer) &&
(!(source_desc->common.flags & AOPOBJ_STATIC_POINTER))) {
dest_desc->buffer.pointer = ACPI_MEM_ALLOCATE (source_desc->buffer.length);
if (!dest_desc->buffer.pointer) {
@@ -665,11 +665,11 @@
/*
* Allocate and copy the actual string if and only if:
- * 1) There is a valid string (length > 0)
+ * 1) There is a valid string pointer
* 2) The string is not static (not in an ACPI table) (in this case,
* the actual pointer was already copied above)
*/
- if ((source_desc->string.length) &&
+ if ((source_desc->string.pointer) &&
(!(source_desc->common.flags & AOPOBJ_STATIC_POINTER))) {
dest_desc->string.pointer = ACPI_MEM_ALLOCATE ((acpi_size) source_desc->string.length + 1);
if (!dest_desc->string.pointer) {
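The utcopy.c change gates the deep copy on the pointer being valid rather than on length > 0, so a legitimate zero-length buffer still gets its own allocation instead of silently aliasing the source. A minimal sketch of that guard, using malloc/memcpy in place of ACPI_MEM_ALLOCATE and a simplified buffer descriptor:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct buf { size_t length; unsigned char *pointer; };

/* Deep-copy guarded by the pointer, not the length: a valid but empty
 * buffer (length == 0, pointer != NULL) still gets its own allocation
 * instead of sharing the source pointer. */
static int copy_buf(const struct buf *src, struct buf *dst)
{
    *dst = *src;                        /* shallow copy first */
    if (src->pointer) {
        /* malloc(0) may return NULL, so allocate at least one byte */
        dst->pointer = malloc(src->length ? src->length : 1);
        if (!dst->pointer)
            return -1;
        memcpy(dst->pointer, src->pointer, src->length);
    }
    return 0;
}
```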
diff -Nru a/drivers/acpi/utilities/utglobal.c b/drivers/acpi/utilities/utglobal.c
--- a/drivers/acpi/utilities/utglobal.c Tue Mar 4 19:30:04 2003
+++ b/drivers/acpi/utilities/utglobal.c Tue Mar 4 19:30:04 2003
@@ -729,6 +729,10 @@
acpi_gbl_acpi_mutex_info[i].use_count = 0;
}
+ /* GPE support */
+
+ acpi_gbl_gpe_block_list_head = NULL;
+
/* Global notify handlers */
acpi_gbl_sys_notify.handler = NULL;
@@ -766,8 +770,6 @@
/* Hardware oriented */
- acpi_gbl_gpe_register_info = NULL;
- acpi_gbl_gpe_number_info = NULL;
acpi_gbl_events_initialized = FALSE;
/* Namespace */
diff -Nru a/drivers/atm/firestream.c b/drivers/atm/firestream.c
--- a/drivers/atm/firestream.c Tue Mar 4 19:30:04 2003
+++ b/drivers/atm/firestream.c Tue Mar 4 19:30:04 2003
@@ -105,7 +105,7 @@
The FS50 CAM (VP/VC match registers) always take the lowest channel
number that matches. This is not a problem.
- However, they also ignore wether the channel is enabled or
+ However, they also ignore whether the channel is enabled or
not. This means that if you allocate channel 0 to 1.2 and then
channel 1 to 0.0, then disabeling channel 0 and writing 0 to the
match channel for channel 0 will "steal" the traffic from channel
diff -Nru a/drivers/atm/fore200e.c b/drivers/atm/fore200e.c
--- a/drivers/atm/fore200e.c Tue Mar 4 19:30:14 2003
+++ b/drivers/atm/fore200e.c Tue Mar 4 19:30:14 2003
@@ -1132,8 +1132,7 @@
return;
}
- do_gettimeofday(&vcc->timestamp);
- skb->stamp = vcc->timestamp;
+ do_gettimeofday(&skb->stamp);
#ifdef FORE200E_52BYTE_AAL0_SDU
if (cell_header) {
diff -Nru a/drivers/atm/horizon.c b/drivers/atm/horizon.c
--- a/drivers/atm/horizon.c Tue Mar 4 19:30:03 2003
+++ b/drivers/atm/horizon.c Tue Mar 4 19:30:03 2003
@@ -2874,11 +2874,7 @@
// writes to adapter memory (handles IRQ and SMP)
spin_lock_init (&dev->mem_lock);
-#if LINUX_VERSION_CODE >= 0x20303
init_waitqueue_head (&dev->tx_queue);
-#else
- dev->tx_queue = 0;
-#endif
// vpi in 0..4, vci in 6..10
dev->atm_dev->ci_range.vpi_bits = vpi_bits;
diff -Nru a/drivers/atm/horizon.h b/drivers/atm/horizon.h
--- a/drivers/atm/horizon.h Tue Mar 4 19:30:13 2003
+++ b/drivers/atm/horizon.h Tue Mar 4 19:30:13 2003
@@ -422,11 +422,7 @@
unsigned int tx_regions; // number of remaining regions
spinlock_t mem_lock;
-#if LINUX_VERSION_CODE >= 0x20303
wait_queue_head_t tx_queue;
-#else
- struct wait_queue * tx_queue;
-#endif
u8 irq;
long flags;
diff -Nru a/drivers/atm/iphase.c b/drivers/atm/iphase.c
--- a/drivers/atm/iphase.c Tue Mar 4 19:30:13 2003
+++ b/drivers/atm/iphase.c Tue Mar 4 19:30:13 2003
@@ -436,7 +436,7 @@
if (crm == 0) crm = 1;
f_abr_vc->f_crm = crm & 0xff;
f_abr_vc->f_pcr = cellrate_to_float(srv_p->pcr);
- icr = MIN( srv_p->icr, (srv_p->tbe > srv_p->frtt) ?
+ icr = min( srv_p->icr, (srv_p->tbe > srv_p->frtt) ?
((srv_p->tbe/srv_p->frtt)*1000000) :
(1000000/(srv_p->frtt/srv_p->tbe)));
f_abr_vc->f_icr = cellrate_to_float(icr);
@@ -2071,7 +2071,7 @@
- UBR Table size is 4K
- UBR wait queue is 4K
since the table and wait queues are contiguous, all the bytes
- can be intialized by one memeset.
+ can be initialized by one memset.
*/
vcsize_sel = 0;
diff -Nru a/drivers/atm/iphase.h b/drivers/atm/iphase.h
--- a/drivers/atm/iphase.h Tue Mar 4 19:30:05 2003
+++ b/drivers/atm/iphase.h Tue Mar 4 19:30:05 2003
@@ -808,7 +808,6 @@
} r_vc_abr_entry;
#define MRM 3
-#define MIN(x,y) ((x) < (y)) ? (x) : (y)
typedef struct srv_cls_param {
u32 class_type; /* CBR/VBR/ABR/UBR; use the enum above */
@@ -1017,13 +1016,8 @@
spinlock_t tx_lock;
IARTN_Q tx_return_q;
u32 close_pending;
-#if LINUX_VERSION_CODE >= 0x20303
wait_queue_head_t close_wait;
wait_queue_head_t timeout_wait;
-#else
- struct wait_queue *close_wait;
- struct wait_queue *timeout_wait;
-#endif
struct cpcs_trailer_desc *tx_buf;
u16 num_tx_desc, tx_buf_sz, rate_limit;
u32 tx_cell_cnt, tx_pkt_cnt;
diff -Nru a/drivers/atm/lanai.c b/drivers/atm/lanai.c
--- a/drivers/atm/lanai.c Tue Mar 4 19:30:05 2003
+++ b/drivers/atm/lanai.c Tue Mar 4 19:30:05 2003
@@ -1300,7 +1300,7 @@
#define DESCRIPTOR_AAL5_STREAM (0x00004000)
#define DESCRIPTOR_CLP (0x00002000)
-/* Add 32-bit descriptor with it's padding */
+/* Add 32-bit descriptor with its padding */
static inline void vcc_tx_add_aal5_descriptor(struct lanai_vcc *lvcc,
u32 flags, int len)
{
diff -Nru a/drivers/atm/suni.c b/drivers/atm/suni.c
--- a/drivers/atm/suni.c Tue Mar 4 19:30:12 2003
+++ b/drivers/atm/suni.c Tue Mar 4 19:30:12 2003
@@ -233,8 +233,6 @@
if (!(PRIV(dev) = kmalloc(sizeof(struct suni_priv),GFP_KERNEL)))
return -ENOMEM;
- MOD_INC_USE_COUNT;
-
PRIV(dev)->dev = dev;
spin_lock_irqsave(&sunis_lock,flags);
first = !sunis;
@@ -280,7 +278,6 @@
spin_unlock_irqrestore(&sunis_lock,flags);
kfree(PRIV(dev));
- MOD_DEC_USE_COUNT;
return 0;
}
@@ -293,7 +290,7 @@
};
-int __init suni_init(struct atm_dev *dev)
+int suni_init(struct atm_dev *dev)
{
unsigned char mri;
diff -Nru a/drivers/base/Makefile b/drivers/base/Makefile
--- a/drivers/base/Makefile Tue Mar 4 19:30:13 2003
+++ b/drivers/base/Makefile Tue Mar 4 19:30:13 2003
@@ -2,7 +2,7 @@
obj-y := core.o sys.o interface.o power.o bus.o \
driver.o class.o intf.o platform.o \
- cpu.o firmware.o
+ cpu.o firmware.o init.o
obj-$(CONFIG_NUMA) += node.o memblk.o
obj-y += fs/
obj-$(CONFIG_HOTPLUG) += hotplug.o
diff -Nru a/drivers/base/base.h b/drivers/base/base.h
--- a/drivers/base/base.h Tue Mar 4 19:30:07 2003
+++ b/drivers/base/base.h Tue Mar 4 19:30:07 2003
@@ -1,6 +1,7 @@
#undef DEBUG
extern struct semaphore device_sem;
+extern struct semaphore devclass_sem;
extern int bus_add_device(struct device * dev);
extern void bus_remove_device(struct device * dev);
diff -Nru a/drivers/base/bus.c b/drivers/base/bus.c
--- a/drivers/base/bus.c Tue Mar 4 19:30:05 2003
+++ b/drivers/base/bus.c Tue Mar 4 19:30:05 2003
@@ -459,7 +459,7 @@
* @drv: driver.
*
* Detach the driver from the devices it controls, and remove
- * it from it's bus's list of drivers. Finally, we drop the reference
+ * it from its bus's list of drivers. Finally, we drop the reference
* to the bus we took in bus_add_driver().
*/
@@ -544,12 +544,11 @@
subsystem_unregister(&bus->subsys);
}
-static int __init bus_subsys_init(void)
+int __init buses_init(void)
{
return subsystem_register(&bus_subsys);
}
-core_initcall(bus_subsys_init);
EXPORT_SYMBOL(bus_for_each_dev);
EXPORT_SYMBOL(bus_for_each_drv);
diff -Nru a/drivers/base/class.c b/drivers/base/class.c
--- a/drivers/base/class.c Tue Mar 4 19:30:14 2003
+++ b/drivers/base/class.c Tue Mar 4 19:30:14 2003
@@ -13,6 +13,8 @@
#define to_class_attr(_attr) container_of(_attr,struct devclass_attribute,attr)
#define to_class(obj) container_of(obj,struct device_class,subsys.kset.kobj)
+DECLARE_MUTEX(devclass_sem);
+
static ssize_t
devclass_attr_show(struct kobject * kobj, struct attribute * attr, char * buf)
{
@@ -163,29 +165,34 @@
struct device_class * cls;
int error = 0;
+ down(&devclass_sem);
if (dev->driver) {
cls = get_devclass(dev->driver->devclass);
- if (cls) {
- down_write(&cls->subsys.rwsem);
- pr_debug("device class %s: adding device %s\n",
- cls->name,dev->name);
- if (cls->add_device)
- error = cls->add_device(dev);
- if (!error) {
- enum_device(cls,dev);
- interface_add_dev(dev);
- }
-
- list_add_tail(&dev->class_list,&cls->devices.list);
-
- /* notify userspace (call /sbin/hotplug) */
- class_hotplug (dev, "add");
-
- up_write(&cls->subsys.rwsem);
- if (error)
- put_devclass(cls);
+
+ if (!cls)
+ goto Done;
+
+ pr_debug("device class %s: adding device %s\n",
+ cls->name,dev->name);
+ if (cls->add_device)
+ error = cls->add_device(dev);
+ if (error) {
+ put_devclass(cls);
+ goto Done;
}
+
+ down_write(&cls->subsys.rwsem);
+ enum_device(cls,dev);
+ list_add_tail(&dev->class_list,&cls->devices.list);
+ /* notify userspace (call /sbin/hotplug) */
+ class_hotplug (dev, "add");
+
+ up_write(&cls->subsys.rwsem);
+
+ interface_add_dev(dev);
}
+ Done:
+ up(&devclass_sem);
return error;
}
@@ -193,26 +200,33 @@
{
struct device_class * cls;
+ down(&devclass_sem);
if (dev->driver) {
cls = dev->driver->devclass;
- if (cls) {
- down_write(&cls->subsys.rwsem);
- pr_debug("device class %s: removing device %s\n",
- cls->name,dev->name);
- interface_remove_dev(dev);
- unenum_device(cls,dev);
-
- list_del(&dev->class_list);
-
- /* notify userspace (call /sbin/hotplug) */
- class_hotplug (dev, "remove");
-
- if (cls->remove_device)
- cls->remove_device(dev);
- up_write(&cls->subsys.rwsem);
- put_devclass(cls);
- }
+ if (!cls)
+ goto Done;
+
+ interface_remove_dev(dev);
+
+ down_write(&cls->subsys.rwsem);
+ pr_debug("device class %s: removing device %s\n",
+ cls->name,dev->name);
+
+ unenum_device(cls,dev);
+
+ list_del(&dev->class_list);
+
+ /* notify userspace (call /sbin/hotplug) */
+ class_hotplug (dev, "remove");
+
+ up_write(&cls->subsys.rwsem);
+
+ if (cls->remove_device)
+ cls->remove_device(dev);
+ put_devclass(cls);
}
+ Done:
+ up(&devclass_sem);
}
struct device_class * get_devclass(struct device_class * cls)
@@ -252,12 +266,10 @@
subsystem_unregister(&cls->subsys);
}
-static int __init class_subsys_init(void)
+int __init classes_init(void)
{
return subsystem_register(&class_subsys);
}
-
-core_initcall(class_subsys_init);
EXPORT_SYMBOL(devclass_create_file);
EXPORT_SYMBOL(devclass_remove_file);
diff -Nru a/drivers/base/core.c b/drivers/base/core.c
--- a/drivers/base/core.c Tue Mar 4 19:30:04 2003
+++ b/drivers/base/core.c Tue Mar 4 19:30:04 2003
@@ -143,7 +143,6 @@
INIT_LIST_HEAD(&dev->driver_list);
INIT_LIST_HEAD(&dev->bus_list);
INIT_LIST_HEAD(&dev->class_list);
- INIT_LIST_HEAD(&dev->intf_list);
}
/**
@@ -310,12 +309,10 @@
put_device(dev);
}
-static int __init device_subsys_init(void)
+int __init devices_init(void)
{
return subsystem_register(&devices_subsys);
}
-
-core_initcall(device_subsys_init);
EXPORT_SYMBOL(device_initialize);
EXPORT_SYMBOL(device_add);
diff -Nru a/drivers/base/cpu.c b/drivers/base/cpu.c
--- a/drivers/base/cpu.c Tue Mar 4 19:30:05 2003
+++ b/drivers/base/cpu.c Tue Mar 4 19:30:05 2003
@@ -46,9 +46,8 @@
}
-static int __init register_cpu_type(void)
+int __init cpu_dev_init(void)
{
devclass_register(&cpu_devclass);
return driver_register(&cpu_driver);
}
-postcore_initcall(register_cpu_type);
diff -Nru a/drivers/base/firmware.c b/drivers/base/firmware.c
--- a/drivers/base/firmware.c Tue Mar 4 19:30:09 2003
+++ b/drivers/base/firmware.c Tue Mar 4 19:30:09 2003
@@ -19,12 +19,10 @@
subsystem_unregister(s);
}
-static int __init firmware_init(void)
+int __init firmware_init(void)
{
return subsystem_register(&firmware_subsys);
}
-
-core_initcall(firmware_init);
EXPORT_SYMBOL(firmware_register);
EXPORT_SYMBOL(firmware_unregister);
diff -Nru a/drivers/base/hotplug.c b/drivers/base/hotplug.c
--- a/drivers/base/hotplug.c Tue Mar 4 19:30:03 2003
+++ b/drivers/base/hotplug.c Tue Mar 4 19:30:03 2003
@@ -17,6 +17,8 @@
#include
#include
#include
+#include
+#include
#include "base.h"
#include "fs/fs.h"
diff -Nru a/drivers/base/init.c b/drivers/base/init.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/drivers/base/init.c Tue Mar 4 19:30:14 2003
@@ -0,0 +1,34 @@
+
+#include <linux/device.h>
+#include <linux/init.h>
+
+extern int devices_init(void);
+extern int buses_init(void);
+extern int classes_init(void);
+extern int firmware_init(void);
+extern int platform_bus_init(void);
+extern int sys_bus_init(void);
+extern int cpu_dev_init(void);
+
+/**
+ * driver_init - initialize driver model.
+ *
+ * Call the driver model init functions to initialize their
+ * subsystems. Called early from init/main.c.
+ */
+
+void __init driver_init(void)
+{
+ /* These are the core pieces */
+ devices_init();
+ buses_init();
+ classes_init();
+ firmware_init();
+
+ /* These are also core pieces, but must come after the
+ * core core pieces.
+ */
+ platform_bus_init();
+ sys_bus_init();
+ cpu_dev_init();
+}
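driver_init() replaces the old core_initcall/postcore_initcall link-order dependency with one function that states the ordering explicitly. The same idea can be sketched as a table of init hooks invoked in sequence; the hook names here are illustrative, not the kernel's:

```c
#include <assert.h>
#include <stddef.h>

/* Each subsystem exposes an init hook; ordering is explicit in one
 * table instead of being implied by initcall link order. */
static int n_ready;
static int devices_up, buses_up;

static int devices_init(void) { devices_up = ++n_ready; return 0; }
static int buses_init(void)   { buses_up   = ++n_ready; return 0; }

/* The core pieces first, then anything that depends on them. */
static int (*const init_order[])(void) = { devices_init, buses_init };

static void driver_init(void)
{
    for (size_t i = 0; i < sizeof(init_order) / sizeof(init_order[0]); i++)
        init_order[i]();
}
```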
diff -Nru a/drivers/base/intf.c b/drivers/base/intf.c
--- a/drivers/base/intf.c Tue Mar 4 19:30:13 2003
+++ b/drivers/base/intf.c Tue Mar 4 19:30:13 2003
@@ -12,80 +12,31 @@
#define to_intf(node) container_of(node,struct device_interface,kset.kobj.entry)
-#define to_data(e) container_of(e,struct intf_data,kobj.entry)
+#define to_dev(d) container_of(d,struct device,class_list)
/**
* intf_dev_link - create sysfs symlink for interface.
- * @data: interface data descriptor.
+ * @intf: interface.
+ * @dev: device.
*
* Create a symlink 'phys' in the interface's directory to
*/
-static int intf_dev_link(struct intf_data * data)
+static int intf_dev_link(struct device_interface * intf, struct device * dev)
{
- char name[16];
- snprintf(name,16,"%d",data->intf_num);
- return sysfs_create_link(&data->intf->kset.kobj,&data->dev->kobj,name);
+ return sysfs_create_link(&intf->kset.kobj,&dev->kobj,dev->bus_id);
}
/**
* intf_dev_unlink - remove symlink for interface.
- * @intf: interface data descriptor.
- *
- */
-
-static void intf_dev_unlink(struct intf_data * data)
-{
- char name[16];
- snprintf(name,16,"%d",data->intf_num);
- sysfs_remove_link(&data->intf->kset.kobj,name);
-}
-
-
-/**
- * interface_add_data - attach data descriptor
- * @data: interface data descriptor.
- *
- * This attaches the per-instance interface object to the
- * interface (by registering its kobject) and the device
- * itself (by inserting it into the device's list).
- *
- * Note that there is no explicit protection done in this
- * function. This should be called from the interface's
- * add_device() method, which is called under the protection
- * of the class's rwsem.
- */
-
-int interface_add_data(struct intf_data * data)
-{
- struct device_interface * intf = data->intf;
-
- if (intf) {
- data->intf_num = intf->devnum++;
- data->kobj.kset = &intf->kset;
- kobject_register(&data->kobj);
-
- list_add_tail(&data->dev_entry,&data->dev->intf_list);
- return intf_dev_link(data);
- }
- return -EINVAL;
-}
-
-
-/**
- * interface_remove_data - detach data descriptor.
- * @data: interface data descriptor.
+ * @intf: interface.
+ * @dev: device.
*
- * This detaches the per-instance data descriptor by removing
- * it from the device's list and unregistering the kobject from
- * the subsystem.
*/
-void interface_remove_data(struct intf_data * data)
+static void intf_dev_unlink(struct device_interface * intf, struct device * dev)
{
- intf_dev_unlink(data);
- list_del_init(&data->dev_entry);
- kobject_unregister(&data->kobj);
+ sysfs_remove_link(&intf->kset.kobj,dev->bus_id);
}
@@ -103,33 +54,28 @@
{
int error = 0;
- if (intf->add_device)
- error = intf->add_device(dev);
+ if (intf->add_device) {
+ if (!(error = intf->add_device(dev)))
+ intf_dev_link(intf,dev);
+ }
pr_debug(" -> %s (%d)\n",dev->bus_id,error);
return error;
}
/**
* del - detach device from interface.
- * @data: interface data descriptor.
- *
- * Another simple helper. Remove the data descriptor from
- * the device and the interface, then call the interface's
- * remove_device() method.
+ * @intf: interface.
+ * @dev: device.
*/
-static void del(struct intf_data * data)
+static void del(struct device_interface * intf, struct device * dev)
{
- struct device_interface * intf = data->intf;
-
pr_debug(" -> %s ",intf->name);
- interface_remove_data(data);
if (intf->remove_device)
- intf->remove_device(data);
+ intf->remove_device(dev);
+ intf_dev_unlink(intf,dev);
}
-#define to_dev(entry) container_of(entry,struct device,class_list)
-
/**
* add_intf - add class's devices to interface.
@@ -145,10 +91,8 @@
struct device_class * cls = intf->devclass;
struct list_head * entry;
- down_write(&cls->subsys.rwsem);
list_for_each(entry,&cls->devices.list)
add(intf,to_dev(entry));
- up_write(&cls->subsys.rwsem);
}
/**
@@ -164,6 +108,7 @@
{
struct device_class * cls = get_devclass(intf->devclass);
+ down(&devclass_sem);
if (cls) {
pr_debug("register interface '%s' with class '%s'\n",
intf->name,cls->name);
@@ -173,6 +118,7 @@
kset_register(&intf->kset);
add_intf(intf);
}
+ up(&devclass_sem);
return 0;
}
@@ -188,14 +134,13 @@
static void del_intf(struct device_interface * intf)
{
+ struct device_class * cls = intf->devclass;
struct list_head * entry;
- down_write(&intf->devclass->subsys.rwsem);
- list_for_each(entry,&intf->kset.list) {
- struct intf_data * data = to_data(entry);
- del(data);
+ list_for_each(entry,&cls->devices.list) {
+ struct device * dev = to_dev(entry);
+ del(intf,dev);
}
- up_write(&intf->devclass->subsys.rwsem);
}
/**
@@ -210,6 +155,8 @@
void interface_unregister(struct device_interface * intf)
{
struct device_class * cls = intf->devclass;
+
+ down(&devclass_sem);
if (cls) {
pr_debug("unregistering interface '%s' from class '%s'\n",
intf->name,cls->name);
@@ -217,6 +164,7 @@
kset_unregister(&intf->kset);
put_devclass(cls);
}
+ up(&devclass_sem);
}
@@ -255,20 +203,21 @@
* This is another helper for the class driver core, and called
* when the device is being removed from the class.
*
- * We iterate over the list of interface data descriptors attached
- * to the device, and call del() [above] for each. Again, the
- * class's rwsem is assumed to be held during this.
+ * We iterate over the list of the class's devices and call del()
+ * [above] for each. Again, the class's rwsem is _not_ held, but
+ * the devclass_sem is (see class.c).
*/
void interface_remove_dev(struct device * dev)
{
struct list_head * entry, * next;
+ struct device_class * cls = dev->driver->devclass;
pr_debug("interfaces: removing device %s\n",dev->name);
- list_for_each_safe(entry,next,&dev->intf_list) {
- struct intf_data * intf_data = to_data(entry);
- del(intf_data);
+ list_for_each_safe(entry,next,&cls->subsys.kset.list) {
+ struct device_interface * intf = to_intf(entry);
+ del(intf,dev);
}
}
diff -Nru a/drivers/base/platform.c b/drivers/base/platform.c
--- a/drivers/base/platform.c Tue Mar 4 19:30:04 2003
+++ b/drivers/base/platform.c Tue Mar 4 19:30:04 2003
@@ -41,9 +41,29 @@
if (pdev)
device_unregister(&pdev->dev);
}
-
+
+
+/**
+ * platform_match - bind platform device to platform driver.
+ * @dev: device.
+ * @drv: driver.
+ *
+ * Platform device IDs are assumed to be encoded like this:
+ * "", where is a short description of the
+ * type of device, like "pci" or "floppy", and is the
+ * enumerated instance of the device, like '0' or '42'.
+ * Driver IDs are simply "".
+ * So, extract the from the device, and compare it against
+ * the name of the driver. Return whether they match or not.
+ */
+
static int platform_match(struct device * dev, struct device_driver * drv)
{
+ char name[BUS_ID_SIZE];
+
+ if (sscanf(dev->bus_id,"%s",name))
+ return (strcmp(name,drv->name) == 0);
+
return 0;
}
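The comment describes matching a device ID of the form "&lt;name&gt;&lt;instance&gt;" against a bare driver name. A sketch of that scheme, stripping the trailing instance digits before comparing; note this implements the documented intent, whereas the sscanf in the patch only grabs the whole whitespace-delimited token:

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Match a device id of the form "<name><instance>" (e.g. "floppy0")
 * against a bare driver name ("floppy") by ignoring trailing digits. */
static int platform_name_match(const char *bus_id, const char *drv_name)
{
    size_t n = strlen(bus_id);
    while (n > 0 && isdigit((unsigned char)bus_id[n - 1]))
        n--;                              /* drop the <instance> part */
    return n == strlen(drv_name) && strncmp(bus_id, drv_name, n) == 0;
}
```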
@@ -52,13 +72,11 @@
.match = platform_match,
};
-static int __init platform_bus_init(void)
+int __init platform_bus_init(void)
{
device_register(&legacy_bus);
return bus_register(&platform_bus_type);
}
-
-postcore_initcall(platform_bus_init);
EXPORT_SYMBOL(platform_device_register);
EXPORT_SYMBOL(platform_device_unregister);
diff -Nru a/drivers/base/sys.c b/drivers/base/sys.c
--- a/drivers/base/sys.c Tue Mar 4 19:30:03 2003
+++ b/drivers/base/sys.c Tue Mar 4 19:30:03 2003
@@ -138,13 +138,12 @@
.name = "system",
};
-static int sys_bus_init(void)
+int __init sys_bus_init(void)
{
bus_register(&system_bus_type);
return device_register(&system_bus);
}
-postcore_initcall(sys_bus_init);
EXPORT_SYMBOL(system_bus_type);
EXPORT_SYMBOL(sys_device_register);
EXPORT_SYMBOL(sys_device_unregister);
diff -Nru a/drivers/block/cciss.c b/drivers/block/cciss.c
--- a/drivers/block/cciss.c Tue Mar 4 19:30:14 2003
+++ b/drivers/block/cciss.c Tue Mar 4 19:30:14 2003
@@ -87,7 +87,11 @@
};
/* How long to wait (in milliseconds) for board to go into simple mode */
-#define MAX_CONFIG_WAIT 1000
+#define MAX_CONFIG_WAIT 30000
+#define MAX_IOCTL_CONFIG_WAIT 1000
+
+/*define how many times we will try a command because of bus resets */
+#define MAX_CMD_RETRIES 3
#define READ_AHEAD 128
#define NR_CMDS 384 /* #commands that can be outstanding */
@@ -116,7 +120,7 @@
static void start_io( ctlr_info_t *h);
static int sendcmd( __u8 cmd, int ctlr, void *buff, size_t size,
unsigned int use_unit_num, unsigned int log_unit, __u8 page_code,
- unsigned char *scsi3addr);
+ unsigned char *scsi3addr, int cmd_type);
#ifdef CONFIG_PROC_FS
static int cciss_proc_get_info(char *buffer, char **start, off_t offset,
@@ -351,7 +355,7 @@
if (ctlr >= MAX_CTLR || hba[ctlr] == NULL)
return -ENXIO;
/*
- * Root is allowed to open raw volume zero even if its not configured
+ * Root is allowed to open raw volume zero even if it's not configured
* so array config can still work. I don't think I really like this,
* but I'm already using way to many device nodes to claim another one
* for "raw controller".
@@ -467,8 +471,7 @@
&(c->cfgtable->HostWrite.CoalIntCount));
writel( CFGTBL_ChangeReq, c->vaddr + SA5_DOORBELL);
- for(i=0;i<MAX_CONFIG_WAIT;i++) {
+ for(i=0;i<MAX_IOCTL_CONFIG_WAIT;i++) {
if (!(readl(c->vaddr + SA5_DOORBELL)
& CFGTBL_ChangeReq))
break;
@@ -476,8 +479,8 @@
udelay(1000);
}
spin_unlock_irqrestore(CCISS_LOCK(ctlr), flags);
- if (i >= MAX_CONFIG_WAIT)
- return( -EFAULT);
+ if (i >= MAX_IOCTL_CONFIG_WAIT)
+ return -EAGAIN;
return(0);
}
case CCISS_GETNODENAME:
@@ -514,8 +517,7 @@
writel( CFGTBL_ChangeReq, c->vaddr + SA5_DOORBELL);
- for(i=0;i<MAX_CONFIG_WAIT;i++) {
+ for(i=0;i<MAX_IOCTL_CONFIG_WAIT;i++) {
if (!(readl(c->vaddr + SA5_DOORBELL)
& CFGTBL_ChangeReq))
break;
@@ -523,8 +525,8 @@
udelay(1000);
}
spin_unlock_irqrestore(CCISS_LOCK(ctlr), flags);
- if (i >= MAX_CONFIG_WAIT)
- return( -EFAULT);
+ if (i >= MAX_IOCTL_CONFIG_WAIT)
+ return -EAGAIN;
return(0);
}
@@ -575,6 +577,24 @@
case CCISS_REVALIDVOLS:
return( revalidate_allvol(inode->i_rdev));
+ case CCISS_GETLUNINFO: {
+ LogvolInfo_struct luninfo;
+ struct gendisk *disk = hba[ctlr]->gendisk[dsk];
+ drive_info_struct *drv = &hba[ctlr]->drv[dsk];
+ int i;
+
+ luninfo.LunID = drv->LunID;
+ luninfo.num_opens = drv->usage_count;
+ luninfo.num_parts = 0;
+ /* count partitions 1 to 15 with sizes > 0 */
+		for (i = 1; i < MAX_PART; i++)
+			if (disk->part[i].nr_sects != 0)
+				luninfo.num_parts++;
+ if (copy_to_user((void *) arg, &luninfo,
+ sizeof(LogvolInfo_struct)))
+ return -EFAULT;
+ return(0);
+ }
case CCISS_DEREGDISK:
return( deregister_disk(ctlr,dsk));
@@ -696,7 +716,153 @@
cmd_free(h, c, 0);
return(0);
}
+ case CCISS_BIG_PASSTHRU: {
+ BIG_IOCTL_Command_struct *ioc;
+ ctlr_info_t *h = hba[ctlr];
+ CommandList_struct *c;
+ unsigned char **buff = NULL;
+ int *buff_size = NULL;
+ u64bit temp64;
+ unsigned long flags;
+ BYTE sg_used = 0;
+ int status = 0;
+ int i;
+ DECLARE_COMPLETION(wait);
+ __u32 left;
+ __u32 sz;
+ BYTE *data_ptr;
+ if (!arg)
+ return -EINVAL;
+ if (!capable(CAP_SYS_RAWIO))
+ return -EPERM;
+ ioc = (BIG_IOCTL_Command_struct *)
+ kmalloc(sizeof(*ioc), GFP_KERNEL);
+ if (!ioc) {
+ status = -ENOMEM;
+ goto cleanup1;
+ }
+		if (copy_from_user(ioc, (void *) arg, sizeof(*ioc))) {
+			status = -EFAULT;
+			goto cleanup1;
+		}
+		if ((ioc->buf_size < 1) &&
+			(ioc->Request.Type.Direction != XFER_NONE)) {
+			status = -EINVAL;
+			goto cleanup1;
+		}
+		/* Check kmalloc limits using all SGs */
+		if (ioc->malloc_size > MAX_KMALLOC_SIZE) {
+			status = -EINVAL;
+			goto cleanup1;
+		}
+		if (ioc->buf_size > ioc->malloc_size * MAXSGENTRIES) {
+			status = -EINVAL;
+			goto cleanup1;
+		}
+ buff = (unsigned char **) kmalloc(MAXSGENTRIES *
+ sizeof(char *), GFP_KERNEL);
+ if (!buff) {
+ status = -ENOMEM;
+ goto cleanup1;
+ }
+		memset(buff, 0, MAXSGENTRIES * sizeof(char *));
+ buff_size = (int *) kmalloc(MAXSGENTRIES * sizeof(int),
+ GFP_KERNEL);
+ if (!buff_size) {
+ status = -ENOMEM;
+ goto cleanup1;
+ }
+ left = ioc->buf_size;
+ data_ptr = (BYTE *) ioc->buf;
+ while (left) {
+ sz = (left > ioc->malloc_size) ? ioc->malloc_size : left;
+ buff_size[sg_used] = sz;
+ buff[sg_used] = kmalloc(sz, GFP_KERNEL);
+ if (buff[sg_used] == NULL) {
+ status = -ENOMEM;
+ goto cleanup1;
+ }
+ if (ioc->Request.Type.Direction == XFER_WRITE &&
+ copy_from_user(buff[sg_used], data_ptr, sz)) {
+				status = -EFAULT;
+ goto cleanup1;
+ }
+ left -= sz;
+ data_ptr += sz;
+ sg_used++;
+ }
+ if ((c = cmd_alloc(h , 0)) == NULL) {
+ status = -ENOMEM;
+ goto cleanup1;
+ }
+ c->cmd_type = CMD_IOCTL_PEND;
+ c->Header.ReplyQueue = 0;
+
+ if( ioc->buf_size > 0) {
+ c->Header.SGList = sg_used;
+ c->Header.SGTotal= sg_used;
+ } else {
+ c->Header.SGList = 0;
+ c->Header.SGTotal= 0;
+ }
+ c->Header.LUN = ioc->LUN_info;
+ c->Header.Tag.lower = c->busaddr;
+
+ c->Request = ioc->Request;
+ if (ioc->buf_size > 0 ) {
+ int i;
+			for (i = 0; i < sg_used; i++) {
+				temp64.val = (__u64) pci_map_single(h->pdev, buff[i],
+ buff_size[i],
+ PCI_DMA_BIDIRECTIONAL);
+ c->SG[i].Addr.lower = temp64.val32.lower;
+ c->SG[i].Addr.upper = temp64.val32.upper;
+ c->SG[i].Len = buff_size[i];
+ c->SG[i].Ext = 0; /* we are not chaining */
+ }
+ }
+ c->waiting = &wait;
+ /* Put the request on the tail of the request queue */
+ spin_lock_irqsave(CCISS_LOCK(ctlr), flags);
+ addQ(&h->reqQ, c);
+ h->Qdepth++;
+ start_io(h);
+ spin_unlock_irqrestore(CCISS_LOCK(ctlr), flags);
+ wait_for_completion(&wait);
+ /* unlock the buffers from DMA */
+		for (i = 0; i < sg_used; i++) {
+			temp64.val32.lower = c->SG[i].Addr.lower;
+ temp64.val32.upper = c->SG[i].Addr.upper;
+ pci_unmap_single( h->pdev, (dma_addr_t) temp64.val,
+ buff_size[i], PCI_DMA_BIDIRECTIONAL);
+ }
+ /* Copy the error information out */
+ ioc->error_info = *(c->err_info);
+ if (copy_to_user((void *) arg, ioc, sizeof(*ioc))) {
+ cmd_free(h, c, 0);
+ status = -EFAULT;
+ goto cleanup1;
+ }
+ if (ioc->Request.Type.Direction == XFER_READ) {
+ /* Copy the data out of the buffer we created */
+ BYTE *ptr = (BYTE *) ioc->buf;
+ for(i=0; i< sg_used; i++) {
+ if (copy_to_user(ptr, buff[i], buff_size[i])) {
+ cmd_free(h, c, 0);
+ status = -EFAULT;
+ goto cleanup1;
+ }
+ ptr += buff_size[i];
+ }
+ }
+ cmd_free(h, c, 0);
+ status = 0;
+cleanup1:
+	if (buff) {
+		for (i = 0; i < sg_used; i++)
+			kfree(buff[i]);
+		kfree(buff);
+	}
+	if (buff_size)
+		kfree(buff_size);
+	if (ioc)
+		kfree(ioc);
+	return status;
+	}
 	hba[ctlr]->drv[logvol].LunID = 0;
return(0);
}
-static int sendcmd_withirq(__u8 cmd,
- int ctlr,
- void *buff,
- size_t size,
- unsigned int use_unit_num,
- unsigned int log_unit,
- __u8 page_code )
+static int fill_cmd(CommandList_struct *c, __u8 cmd, int ctlr, void *buff,
+ size_t size,
+ unsigned int use_unit_num, /* 0: address the controller,
+ 1: address logical volume log_unit,
+ 2: periph device address is scsi3addr */
+ unsigned int log_unit, __u8 page_code, unsigned char *scsi3addr,
+ int cmd_type)
{
- ctlr_info_t *h = hba[ctlr];
- CommandList_struct *c;
- u64bit buff_dma_handle;
- unsigned long flags;
- int return_status = IO_OK;
- DECLARE_COMPLETION(wait);
-
- if ((c = cmd_alloc(h , 0)) == NULL)
- {
- return -ENOMEM;
- }
- // Fill in the command type
+ ctlr_info_t *h= hba[ctlr];
+ u64bit buff_dma_handle;
+ int status = IO_OK;
+
c->cmd_type = CMD_IOCTL_PEND;
- // Fill in Command Header
- c->Header.ReplyQueue = 0; // unused in simple mode
- if( buff != NULL) // buffer to fill
- {
+ c->Header.ReplyQueue = 0;
+ if( buff != NULL) {
c->Header.SGList = 1;
c->Header.SGTotal= 1;
- } else // no buffers to fill
- {
+ } else {
c->Header.SGList = 0;
c->Header.SGTotal= 0;
}
- c->Header.Tag.lower = c->busaddr; // use the kernel address the cmd block for tag
- // Fill in Request block
- switch(cmd)
- {
+ c->Header.Tag.lower = c->busaddr;
+
+ c->Request.Type.Type = cmd_type;
+ if (cmd_type == TYPE_CMD) {
+ switch(cmd) {
case CISS_INQUIRY:
/* If the logical unit number is 0 then, this is going
- to controller so It's a physical command
- mode = 0 target = 0.
- So we have nothing to write.
- Otherwise
- mode = 1 target = LUNID
- */
- if(use_unit_num != 0)
- {
to the controller, so it's a physical command
+ mode = 0 target = 0. So we have nothing to write.
+ otherwise, if use_unit_num == 1,
+ mode = 1(volume set addressing) target = LUNID
+ otherwise, if use_unit_num == 2,
+ mode = 0(periph dev addr) target = scsi3addr */
+ if (use_unit_num == 1) {
c->Header.LUN.LogDev.VolId=
- hba[ctlr]->drv[log_unit].LunID;
+ h->drv[log_unit].LunID;
c->Header.LUN.LogDev.Mode = 1;
+ } else if (use_unit_num == 2) {
+ memcpy(c->Header.LUN.LunAddrBytes,scsi3addr,8);
+ c->Header.LUN.LogDev.Mode = 0;
}
- if(page_code != 0)
- {
+ /* are we trying to read a vital product page */
+ if(page_code != 0) {
c->Request.CDB[1] = 0x01;
c->Request.CDB[2] = page_code;
}
c->Request.CDBLen = 6;
- c->Request.Type.Type = TYPE_CMD; // It is a command.
c->Request.Type.Attribute = ATTR_SIMPLE;
- c->Request.Type.Direction = XFER_READ; // Read
- c->Request.Timeout = 0; // Don't time out
+ c->Request.Type.Direction = XFER_READ;
+ c->Request.Timeout = 0;
c->Request.CDB[0] = CISS_INQUIRY;
c->Request.CDB[4] = size & 0xFF;
break;
- case CISS_REPORT_LOG:
+ case CISS_REPORT_LOG:
+ case CISS_REPORT_PHYS:
/* Talking to controller so It's a physical command
- mode = 00 target = 0.
- So we have nothing to write.
+ mode = 00 target = 0. Nothing to write.
*/
- c->Request.CDBLen = 12;
- c->Request.Type.Type = TYPE_CMD; // It is a command.
- c->Request.Type.Attribute = ATTR_SIMPLE;
- c->Request.Type.Direction = XFER_READ; // Read
- c->Request.Timeout = 0; // Don't time out
- c->Request.CDB[0] = CISS_REPORT_LOG;
- c->Request.CDB[6] = (size >> 24) & 0xFF; //MSB
- c->Request.CDB[7] = (size >> 16) & 0xFF;
- c->Request.CDB[8] = (size >> 8) & 0xFF;
- c->Request.CDB[9] = size & 0xFF;
- break;
- case CCISS_READ_CAPACITY:
- c->Header.LUN.LogDev.VolId=
- hba[ctlr]->drv[log_unit].LunID;
+ c->Request.CDBLen = 12;
+ c->Request.Type.Attribute = ATTR_SIMPLE;
+ c->Request.Type.Direction = XFER_READ;
+ c->Request.Timeout = 0;
+ c->Request.CDB[0] = cmd;
+			c->Request.CDB[6] = (size >> 24) & 0xFF;	/* MSB */
+ c->Request.CDB[7] = (size >> 16) & 0xFF;
+ c->Request.CDB[8] = (size >> 8) & 0xFF;
+ c->Request.CDB[9] = size & 0xFF;
+ break;
+
+ case CCISS_READ_CAPACITY:
+ c->Header.LUN.LogDev.VolId = h->drv[log_unit].LunID;
c->Header.LUN.LogDev.Mode = 1;
c->Request.CDBLen = 10;
- c->Request.Type.Type = TYPE_CMD; // It is a command.
- c->Request.Type.Attribute = ATTR_SIMPLE;
- c->Request.Type.Direction = XFER_READ; // Read
- c->Request.Timeout = 0; // Don't time out
- c->Request.CDB[0] = CCISS_READ_CAPACITY;
+ c->Request.Type.Attribute = ATTR_SIMPLE;
+ c->Request.Type.Direction = XFER_READ;
+ c->Request.Timeout = 0;
+ c->Request.CDB[0] = cmd;
+ break;
+ case CCISS_CACHE_FLUSH:
+ c->Request.CDBLen = 12;
+ c->Request.Type.Attribute = ATTR_SIMPLE;
+ c->Request.Type.Direction = XFER_WRITE;
+ c->Request.Timeout = 0;
+ c->Request.CDB[0] = BMIC_WRITE;
+ c->Request.CDB[6] = BMIC_CACHE_FLUSH;
break;
default:
printk(KERN_WARNING
- "cciss: Unknown Command 0x%c sent attempted\n", cmd);
- cmd_free(h, c, 1);
+ "cciss%d: Unknown Command 0x%c\n", ctlr, cmd);
return(IO_ERROR);
- };
-
- // Fill in the scatter gather information
- if (size > 0 )
- {
- buff_dma_handle.val = (__u64) pci_map_single( h->pdev,
+ }
+ } else if (cmd_type == TYPE_MSG) {
+ switch (cmd) {
+ case 3: /* No-Op message */
+ c->Request.CDBLen = 1;
+ c->Request.Type.Attribute = ATTR_SIMPLE;
+ c->Request.Type.Direction = XFER_WRITE;
+ c->Request.Timeout = 0;
+ c->Request.CDB[0] = cmd;
+ break;
+ default:
+ printk(KERN_WARNING
+ "cciss%d: unknown message type %d\n",
+ ctlr, cmd);
+ return IO_ERROR;
+ }
+ } else {
+ printk(KERN_WARNING
+ "cciss%d: unknown command type %d\n", ctlr, cmd_type);
+ return IO_ERROR;
+ }
+ /* Fill in the scatter gather information */
+ if (size > 0) {
+ buff_dma_handle.val = (__u64) pci_map_single(h->pdev,
buff, size, PCI_DMA_BIDIRECTIONAL);
c->SG[0].Addr.lower = buff_dma_handle.val32.lower;
c->SG[0].Addr.upper = buff_dma_handle.val32.upper;
c->SG[0].Len = size;
- c->SG[0].Ext = 0; // we are not chaining
+ c->SG[0].Ext = 0; /* we are not chaining */
+ }
+ return status;
+}
+static int sendcmd_withirq(__u8 cmd,
+ int ctlr,
+ void *buff,
+ size_t size,
+ unsigned int use_unit_num,
+ unsigned int log_unit,
+ __u8 page_code,
+ int cmd_type)
+{
+ ctlr_info_t *h = hba[ctlr];
+ CommandList_struct *c;
+ u64bit buff_dma_handle;
+ unsigned long flags;
+ int return_status;
+ DECLARE_COMPLETION(wait);
+
+ if ((c = cmd_alloc(h , 0)) == NULL)
+ return -ENOMEM;
+ return_status = fill_cmd(c, cmd, ctlr, buff, size, use_unit_num,
+ log_unit, page_code, NULL, cmd_type);
+ if (return_status != IO_OK) {
+ cmd_free(h, c, 0);
+ return return_status;
}
+resend_cmd2:
c->waiting = &wait;
/* Put the request on the tail of the queue and send it */
@@ -934,10 +1141,6 @@
wait_for_completion(&wait);
- /* unlock the buffers from DMA */
- pci_unmap_single( h->pdev, (dma_addr_t) buff_dma_handle.val,
- size, PCI_DMA_BIDIRECTIONAL);
-
if(c->err_info->CommandStatus != 0)
{ /* an error has occurred */
switch(c->err_info->CommandStatus)
@@ -989,11 +1192,22 @@
return_status = IO_ERROR;
break;
case CMD_UNSOLICITED_ABORT:
- printk(KERN_WARNING "cciss: cmd %p aborted "
- "do to an unsolicited abort\n", c);
+ printk(KERN_WARNING
+ "cciss%d: unsolicited abort %p\n",
+ ctlr, c);
+ if (c->retry_count < MAX_CMD_RETRIES) {
+ printk(KERN_WARNING
+ "cciss%d: retrying %p\n",
+ ctlr, c);
+ c->retry_count++;
+ /* erase the old error information */
+ memset(c->err_info, 0,
+ sizeof(ErrorInfo_struct));
+ return_status = IO_OK;
+ INIT_COMPLETION(wait);
+ goto resend_cmd2;
+ }
return_status = IO_ERROR;
-
-
break;
default:
printk(KERN_WARNING "cciss: cmd %p returned "
@@ -1002,6 +1216,9 @@
return_status = IO_ERROR;
}
}
+ /* unlock the buffers from DMA */
+ pci_unmap_single( h->pdev, (dma_addr_t) buff_dma_handle.val,
+ size, PCI_DMA_BIDIRECTIONAL);
cmd_free(h, c, 0);
return(return_status);
@@ -1015,10 +1232,10 @@
memset(inq_buff, 0, sizeof(InquiryData_struct));
if (withirq)
return_code = sendcmd_withirq(CISS_INQUIRY, ctlr,
- inq_buff, sizeof(*inq_buff), 1, logvol ,0xC1);
+ inq_buff, sizeof(*inq_buff), 1, logvol ,0xC1, TYPE_CMD);
else
return_code = sendcmd(CISS_INQUIRY, ctlr, inq_buff,
- sizeof(*inq_buff), 1, logvol ,0xC1, NULL);
+ sizeof(*inq_buff), 1, logvol ,0xC1, NULL, TYPE_CMD);
if (return_code == IO_OK) {
if(inq_buff->data_byte[8] == 0xFF) {
printk(KERN_WARNING
@@ -1057,10 +1274,10 @@
memset(buf, 0, sizeof(*buf));
if (withirq)
return_code = sendcmd_withirq(CCISS_READ_CAPACITY,
- ctlr, buf, sizeof(*buf), 1, logvol, 0 );
+ ctlr, buf, sizeof(*buf), 1, logvol, 0, TYPE_CMD);
else
return_code = sendcmd(CCISS_READ_CAPACITY,
- ctlr, buf, sizeof(*buf), 1, logvol, 0, NULL );
+ ctlr, buf, sizeof(*buf), 1, logvol, 0, NULL, TYPE_CMD);
if (return_code == IO_OK) {
*total_size = be32_to_cpu(*((__u32 *) &buf->total_size[0]))+1;
*block_size = be32_to_cpu(*((__u32 *) &buf->block_size[0]));
@@ -1111,7 +1328,7 @@
goto mem_msg;
return_code = sendcmd_withirq(CISS_REPORT_LOG, ctlr, ld_buff,
- sizeof(ReportLunData_struct), 0, 0, 0 );
+ sizeof(ReportLunData_struct), 0, 0, 0, TYPE_CMD);
if( return_code == IO_OK)
{
@@ -1265,126 +1482,27 @@
2: periph device address is scsi3addr */
unsigned int log_unit,
__u8 page_code,
- unsigned char *scsi3addr)
+ unsigned char *scsi3addr,
+ int cmd_type)
{
CommandList_struct *c;
int i;
unsigned long complete;
ctlr_info_t *info_p= hba[ctlr];
u64bit buff_dma_handle;
+ int status;
- c = cmd_alloc(info_p, 1);
- if (c == NULL)
- {
+ if ((c = cmd_alloc(info_p, 1)) == NULL) {
printk(KERN_WARNING "cciss: unable to get memory");
return(IO_ERROR);
}
- // Fill in Command Header
- c->Header.ReplyQueue = 0; // unused in simple mode
- if( buff != NULL) // buffer to fill
- {
- c->Header.SGList = 1;
- c->Header.SGTotal= 1;
- } else // no buffers to fill
- {
- c->Header.SGList = 0;
- c->Header.SGTotal= 0;
- }
- c->Header.Tag.lower = c->busaddr; // use the kernel address the cmd block for tag
- // Fill in Request block
- switch(cmd)
- {
- case CISS_INQUIRY:
- /* If the logical unit number is 0 then, this is going
- to controller so It's a physical command
- mode = 0 target = 0.
- So we have nothing to write.
- otherwise, if use_unit_num == 1,
- mode = 1(volume set addressing) target = LUNID
- otherwise, if use_unit_num == 2,
- mode = 0(periph dev addr) target = scsi3addr
- */
- if(use_unit_num == 1)
- {
- c->Header.LUN.LogDev.VolId=
- hba[ctlr]->drv[log_unit].LunID;
- c->Header.LUN.LogDev.Mode = 1;
- }
- else if (use_unit_num == 2)
- {
- memcpy(c->Header.LUN.LunAddrBytes,scsi3addr,8);
- c->Header.LUN.LogDev.Mode = 0; // phys dev addr
- }
-
- /* are we trying to read a vital product page */
- if(page_code != 0)
- {
- c->Request.CDB[1] = 0x01;
- c->Request.CDB[2] = page_code;
- }
- c->Request.CDBLen = 6;
- c->Request.Type.Type = TYPE_CMD; // It is a command.
- c->Request.Type.Attribute = ATTR_SIMPLE;
- c->Request.Type.Direction = XFER_READ; // Read
- c->Request.Timeout = 0; // Don't time out
- c->Request.CDB[0] = CISS_INQUIRY;
- c->Request.CDB[4] = size & 0xFF;
- break;
- case CISS_REPORT_LOG:
- case CISS_REPORT_PHYS:
- /* Talking to controller so It's a physical command
- mode = 00 target = 0.
- So we have nothing to write.
- */
- c->Request.CDBLen = 12;
- c->Request.Type.Type = TYPE_CMD; // It is a command.
- c->Request.Type.Attribute = ATTR_SIMPLE;
- c->Request.Type.Direction = XFER_READ; // Read
- c->Request.Timeout = 0; // Don't time out
- c->Request.CDB[0] = cmd;
- c->Request.CDB[6] = (size >> 24) & 0xFF; //MSB
- c->Request.CDB[7] = (size >> 16) & 0xFF;
- c->Request.CDB[8] = (size >> 8) & 0xFF;
- c->Request.CDB[9] = size & 0xFF;
- break;
-
- case CCISS_READ_CAPACITY:
- c->Header.LUN.LogDev.VolId=
- hba[ctlr]->drv[log_unit].LunID;
- c->Header.LUN.LogDev.Mode = 1;
- c->Request.CDBLen = 10;
- c->Request.Type.Type = TYPE_CMD; // It is a command.
- c->Request.Type.Attribute = ATTR_SIMPLE;
- c->Request.Type.Direction = XFER_READ; // Read
- c->Request.Timeout = 0; // Don't time out
- c->Request.CDB[0] = CCISS_READ_CAPACITY;
- break;
- case CCISS_CACHE_FLUSH:
- c->Request.CDBLen = 12;
- c->Request.Type.Type = TYPE_CMD; // It is a command.
- c->Request.Type.Attribute = ATTR_SIMPLE;
- c->Request.Type.Direction = XFER_WRITE; // No data
- c->Request.Timeout = 0; // Don't time out
- c->Request.CDB[0] = BMIC_WRITE; // BMIC Passthru
- c->Request.CDB[6] = BMIC_CACHE_FLUSH;
- break;
- default:
- printk(KERN_WARNING
- "cciss: Unknown Command 0x%c sent attempted\n",
- cmd);
- cmd_free(info_p, c, 1);
- return(IO_ERROR);
- };
- // Fill in the scatter gather information
- if (size > 0 )
- {
- buff_dma_handle.val = (__u64) pci_map_single( info_p->pdev,
- buff, size, PCI_DMA_BIDIRECTIONAL);
- c->SG[0].Addr.lower = buff_dma_handle.val32.lower;
- c->SG[0].Addr.upper = buff_dma_handle.val32.upper;
- c->SG[0].Len = size;
- c->SG[0].Ext = 0; // we are not chaining
+ status = fill_cmd(c, cmd, ctlr, buff, size, use_unit_num,
+ log_unit, page_code, scsi3addr, cmd_type);
+ if (status != IO_OK) {
+ cmd_free(info_p, c, 1);
+ return status;
}
+resend_cmd1:
/*
* Disable interrupt
*/
@@ -1417,9 +1535,6 @@
printk(KERN_DEBUG "cciss: command completed\n");
#endif /* CCISS_DEBUG */
- /* unlock the data buffer from DMA */
- pci_unmap_single(info_p->pdev, (dma_addr_t) buff_dma_handle.val,
- size, PCI_DMA_BIDIRECTIONAL);
if (complete != 1) {
if ( (complete & CISS_ERROR_BIT)
&& (complete & ~CISS_ERROR_BIT) == c->busaddr)
@@ -1437,8 +1552,30 @@
))
{
complete = c->busaddr;
- } else
- {
+ } else {
+ if (c->err_info->CommandStatus ==
+ CMD_UNSOLICITED_ABORT) {
+ printk(KERN_WARNING "cciss%d: "
+ "unsolicited abort %p\n",
+ ctlr, c);
+ if (c->retry_count < MAX_CMD_RETRIES) {
+ printk(KERN_WARNING
+ "cciss%d: retrying %p\n",
+ ctlr, c);
+ c->retry_count++;
+ /* erase the old error */
+ /* information */
+ memset(c->err_info, 0,
+ sizeof(ErrorInfo_struct));
+ goto resend_cmd1;
+ } else {
+ printk(KERN_WARNING
+ "cciss%d: retried %p too "
+ "many times\n", ctlr, c);
+ status = IO_ERROR;
+ goto cleanup1;
+ }
+ }
printk(KERN_WARNING "ciss ciss%d: sendcmd"
" Error %x \n", ctlr,
c->err_info->CommandStatus);
@@ -1448,27 +1585,31 @@
c->err_info->MoreErrInfo.Invalid_Cmd.offense_size,
c->err_info->MoreErrInfo.Invalid_Cmd.offense_num,
c->err_info->MoreErrInfo.Invalid_Cmd.offense_value);
- cmd_free(info_p,c, 1);
- return(IO_ERROR);
+ status = IO_ERROR;
+ goto cleanup1;
}
}
if (complete != c->busaddr) {
printk( KERN_WARNING "cciss cciss%d: SendCmd "
"Invalid command list address returned! (%lx)\n",
ctlr, complete);
- cmd_free(info_p, c, 1);
- return (IO_ERROR);
+ status = IO_ERROR;
+ goto cleanup1;
}
} else {
printk( KERN_WARNING
"cciss cciss%d: SendCmd Timeout out, "
"No command list address returned!\n",
ctlr);
- cmd_free(info_p, c, 1);
- return (IO_ERROR);
+ status = IO_ERROR;
}
+
+cleanup1:
+ /* unlock the data buffer from DMA */
+ pci_unmap_single(info_p->pdev, (dma_addr_t) buff_dma_handle.val,
+ size, PCI_DMA_BIDIRECTIONAL);
cmd_free(info_p, c, 1);
- return (IO_OK);
+ return (status);
}
/*
* Map (physical) PCI mem into (virtual) kernel space
@@ -1552,27 +1693,35 @@
}
}
+/* Assumes that CCISS_LOCK(h->ctlr) is held. */
+/* Zeros out the error record and then resends the command back */
+/* to the controller */
+static inline void resend_cciss_cmd( ctlr_info_t *h, CommandList_struct *c)
+{
+ /* erase the old error information */
+ memset(c->err_info, 0, sizeof(ErrorInfo_struct));
+
+ /* add it to software queue and then send it to the controller */
+ addQ(&(h->reqQ),c);
+ h->Qdepth++;
+ if(h->Qdepth > h->maxQsinceinit)
+ h->maxQsinceinit = h->Qdepth;
+
+ start_io(h);
+}
/* checks the status of the job and calls complete buffers to mark all
* buffers for the completed job.
*/
-static inline void complete_command( CommandList_struct *cmd, int timeout)
+static inline void complete_command( ctlr_info_t *h, CommandList_struct *cmd,
+ int timeout)
{
int status = 1;
int i;
+ int retry_cmd = 0;
u64bit temp64;
if (timeout)
status = 0;
- /* unmap the DMA mapping for all the scatter gather elements */
-	for(i=0; i<cmd->Header.SGList; i++)
- {
- temp64.val32.lower = cmd->SG[i].Addr.lower;
- temp64.val32.upper = cmd->SG[i].Addr.upper;
- pci_unmap_page(hba[cmd->ctlr]->pdev,
- temp64.val, cmd->SG[i].Len,
- (cmd->Request.Type.Direction == XFER_READ) ?
- PCI_DMA_FROMDEVICE : PCI_DMA_TODEVICE);
- }
if(cmd->err_info->CommandStatus != 0)
{ /* an error has occurred */
@@ -1646,8 +1795,18 @@
status=0;
break;
case CMD_UNSOLICITED_ABORT:
- printk(KERN_WARNING "cciss: cmd %p aborted "
- "do to an unsolicited abort\n", cmd);
+ printk(KERN_WARNING "cciss%d: unsolicited "
+ "abort %p\n", h->ctlr, cmd);
+ if (cmd->retry_count < MAX_CMD_RETRIES) {
+ retry_cmd=1;
+ printk(KERN_WARNING
+ "cciss%d: retrying %p\n",
+ h->ctlr, cmd);
+ cmd->retry_count++;
+ } else
+ printk(KERN_WARNING
+ "cciss%d: %p retried too "
+ "many times\n", h->ctlr, cmd);
status=0;
break;
case CMD_TIMEOUT:
@@ -1662,7 +1821,21 @@
status=0;
}
}
-
+ /* We need to return this command */
+ if(retry_cmd) {
+ resend_cciss_cmd(h,cmd);
+ return;
+ }
+ /* command did not need to be retried */
+ /* unmap the DMA mapping for all the scatter gather elements */
+	for(i=0; i<cmd->Header.SGList; i++) {
+ temp64.val32.lower = cmd->SG[i].Addr.lower;
+ temp64.val32.upper = cmd->SG[i].Addr.upper;
+ pci_unmap_page(hba[cmd->ctlr]->pdev,
+ temp64.val, cmd->SG[i].Len,
+ (cmd->Request.Type.Direction == XFER_READ) ?
+ PCI_DMA_FROMDEVICE : PCI_DMA_TODEVICE);
+ }
complete_buffers(cmd->rq->bio, status);
#ifdef CCISS_DEBUG
@@ -1670,6 +1843,7 @@
#endif /* CCISS_DEBUG */
end_that_request_last(cmd->rq);
+ cmd_free(h,cmd,1);
}
/*
@@ -1816,8 +1990,7 @@
if (c->busaddr == a) {
removeQ(&h->cmpQ, c);
if (c->cmd_type == CMD_RWREQ) {
- complete_command(c, 0);
- cmd_free(h, c, 1);
+ complete_command(h, c, 0);
} else if (c->cmd_type == CMD_IOCTL_PEND) {
complete(c->waiting);
}
@@ -2038,12 +2211,15 @@
&(c->cfgtable->HostWrite.TransportRequest));
writel( CFGTBL_ChangeReq, c->vaddr + SA5_DOORBELL);
 	for(i=0;i<MAX_CONFIG_WAIT;i++) {
 		if (!(readl(c->vaddr + SA5_DOORBELL) & CFGTBL_ChangeReq))
break;
/* delay and try again */
- udelay(1000);
+ set_current_state(TASK_INTERRUPTIBLE);
+ schedule_timeout(10);
}
#ifdef CCISS_DEBUG
@@ -2102,7 +2278,7 @@
}
/* Get the firmware version */
return_code = sendcmd(CISS_INQUIRY, cntl_num, inq_buff,
- sizeof(InquiryData_struct), 0, 0 ,0, NULL );
+ sizeof(InquiryData_struct), 0, 0 ,0, NULL, TYPE_CMD);
if (return_code == IO_OK)
{
hba[cntl_num]->firm_ver[0] = inq_buff->data_byte[32];
@@ -2116,7 +2292,7 @@
}
/* Get the number of logical volumes */
return_code = sendcmd(CISS_REPORT_LOG, cntl_num, ld_buff,
- sizeof(ReportLunData_struct), 0, 0, 0, NULL );
+ sizeof(ReportLunData_struct), 0, 0, 0, NULL, TYPE_CMD);
if( return_code == IO_OK)
{
@@ -2390,7 +2566,8 @@
/* sendcmd will turn off interrupt, and send the flush...
* To write all data in the battery backed cache to disks */
memset(flush_buf, 0, 4);
- return_code = sendcmd(CCISS_CACHE_FLUSH, i, flush_buf, 4, 0, 0, 0, NULL);
+ return_code = sendcmd(CCISS_CACHE_FLUSH, i, flush_buf, 4, 0, 0, 0, NULL,
+ TYPE_CMD);
if(return_code != IO_OK)
{
printk(KERN_WARNING "Error Flushing cache on controller %d\n",
diff -Nru a/drivers/block/cciss_cmd.h b/drivers/block/cciss_cmd.h
--- a/drivers/block/cciss_cmd.h Tue Mar 4 19:30:05 2003
+++ b/drivers/block/cciss_cmd.h Tue Mar 4 19:30:05 2003
@@ -240,6 +240,7 @@
struct _CommandList_struct *next;
struct request * rq;
struct completion *waiting;
+ int retry_count;
#ifdef CONFIG_CISS_SCSI_TAPE
void * scsi_cmd;
#endif
diff -Nru a/drivers/block/cciss_scsi.c b/drivers/block/cciss_scsi.c
--- a/drivers/block/cciss_scsi.c Tue Mar 4 19:30:05 2003
+++ b/drivers/block/cciss_scsi.c Tue Mar 4 19:30:05 2003
@@ -47,7 +47,8 @@
2: address is in scsi3addr */
unsigned int log_unit,
__u8 page_code,
- unsigned char *scsi3addr );
+ unsigned char *scsi3addr,
+ int cmd_type);
int __init cciss_scsi_detect(Scsi_Host_Template *tpnt);
@@ -210,7 +211,7 @@
stk = &sa->cmd_stack;
size = sizeof(struct cciss_scsi_cmd_stack_elem_t) * CMD_STACK_SIZE;
- // pci_alloc_consistent guarentees 32-bit DMA address will
+ // pci_alloc_consistent guarantees 32-bit DMA address will
// be used
stk->pool = (struct cciss_scsi_cmd_stack_elem_t *)
diff -Nru a/drivers/block/cpqarray.c b/drivers/block/cpqarray.c
--- a/drivers/block/cpqarray.c Tue Mar 4 19:30:04 2003
+++ b/drivers/block/cpqarray.c Tue Mar 4 19:30:04 2003
@@ -715,7 +715,7 @@
return -ENXIO;
/*
- * Root is allowed to open raw volume zero even if its not configured
+ * Root is allowed to open raw volume zero even if it's not configured
* so array config can still work. I don't think I really like this,
* but I'm already using way to many device nodes to claim another one
* for "raw controller".
diff -Nru a/drivers/block/deadline-iosched.c b/drivers/block/deadline-iosched.c
--- a/drivers/block/deadline-iosched.c Tue Mar 4 19:30:12 2003
+++ b/drivers/block/deadline-iosched.c Tue Mar 4 19:30:12 2003
@@ -98,7 +98,7 @@
unsigned long expires;
};
-static inline void deadline_move_to_dispatch(struct deadline_data *dd, struct deadline_rq *drq);
+static void deadline_move_request(struct deadline_data *dd, struct deadline_rq *drq);
static kmem_cache_t *drq_pool;
@@ -205,7 +205,7 @@
return;
}
- deadline_move_to_dispatch(dd, __alias);
+ deadline_move_request(dd, __alias);
goto retry;
}
diff -Nru a/drivers/block/genhd.c b/drivers/block/genhd.c
--- a/drivers/block/genhd.c Tue Mar 4 19:30:08 2003
+++ b/drivers/block/genhd.c Tue Mar 4 19:30:08 2003
@@ -1,17 +1,5 @@
/*
- * Code extracted from
- * linux/kernel/hd.c
- *
- * Copyright (C) 1991-1998 Linus Torvalds
- *
- * devfs support - jj, rgooch, 980122
- *
- * Moved partition checking code to fs/partitions* - Russell King
- * (linux@arm.uk.linux.org)
- */
-
-/*
- * TODO: rip out the remaining init crap from this file --hch
+ * gendisk handling
*/
#include
@@ -29,8 +17,9 @@
static struct subsystem block_subsys;
+#define MAX_PROBE_HASH 23 /* random */
-struct blk_probe {
+static struct blk_probe {
struct blk_probe *next;
dev_t dev;
unsigned long range;
@@ -38,21 +27,27 @@
struct gendisk *(*get)(dev_t dev, int *part, void *data);
int (*lock)(dev_t, void *);
void *data;
-} *probes[MAX_BLKDEV];
+} *probes[MAX_PROBE_HASH];
-/* index in the above */
+/* index in the above - for now: assume no multimajor ranges */
static inline int dev_to_index(dev_t dev)
{
- return MAJOR(dev);
+ return MAJOR(dev) % MAX_PROBE_HASH;
}
+/*
+ * Register device numbers dev..(dev+range-1)
+ * range must be nonzero
+ * The hash chain is sorted on range, so that subranges can override.
+ */
void blk_register_region(dev_t dev, unsigned long range, struct module *module,
- struct gendisk *(*probe)(dev_t, int *, void *),
- int (*lock)(dev_t, void *), void *data)
+ struct gendisk *(*probe)(dev_t, int *, void *),
+ int (*lock)(dev_t, void *), void *data)
{
int index = dev_to_index(dev);
struct blk_probe *p = kmalloc(sizeof(struct blk_probe), GFP_KERNEL);
struct blk_probe **s;
+
p->owner = module;
p->get = probe;
p->lock = lock;
@@ -71,6 +66,7 @@
{
int index = dev_to_index(dev);
struct blk_probe **s;
+
down_write(&block_subsys.rwsem);
for (s = &probes[index]; *s; s = &(*s)->next) {
struct blk_probe *p = *s;
@@ -94,6 +90,7 @@
static int exact_lock(dev_t dev, void *data)
{
struct gendisk *p = data;
+
if (!get_disk(p))
return -1;
return 0;
@@ -109,14 +106,14 @@
void add_disk(struct gendisk *disk)
{
disk->flags |= GENHD_FL_UP;
- blk_register_region(MKDEV(disk->major, disk->first_minor), disk->minors,
- NULL, exact_match, exact_lock, disk);
+ blk_register_region(MKDEV(disk->major, disk->first_minor),
+ disk->minors, NULL, exact_match, exact_lock, disk);
register_disk(disk);
elv_register_queue(disk);
}
EXPORT_SYMBOL(add_disk);
-EXPORT_SYMBOL(del_gendisk);
+EXPORT_SYMBOL(del_gendisk); /* in partitions/check.c */
void unlink_gendisk(struct gendisk *disk)
{
@@ -146,18 +143,17 @@
struct gendisk *(*probe)(dev_t, int *, void *);
struct module *owner;
void *data;
- if (p->dev > dev || p->dev + p->range <= dev)
+
+ if (p->dev > dev || p->dev + p->range - 1 < dev)
continue;
- if (p->range >= best) {
- up_read(&block_subsys.rwsem);
- return NULL;
- }
+ if (p->range - 1 >= best)
+ break;
if (!try_module_get(p->owner))
continue;
owner = p->owner;
data = p->data;
probe = p->get;
- best = p->range;
+ best = p->range - 1;
*part = dev - p->dev;
if (p->lock && p->lock(dev, data) < 0) {
module_put(owner);
@@ -169,7 +165,7 @@
module_put(owner);
if (disk)
return disk;
- goto retry;
+ goto retry; /* this terminates: best decreases */
}
up_read(&block_subsys.rwsem);
return NULL;
@@ -245,7 +241,7 @@
static struct gendisk *base_probe(dev_t dev, int *part, void *data)
{
- char name[20];
+ char name[30];
sprintf(name, "block-major-%d", MAJOR(dev));
request_module(name);
return NULL;
@@ -256,11 +252,11 @@
struct blk_probe *base = kmalloc(sizeof(struct blk_probe), GFP_KERNEL);
int i;
memset(base, 0, sizeof(struct blk_probe));
- base->dev = MKDEV(1,0);
- base->range = MKDEV(MAX_BLKDEV-1, 255) - base->dev + 1;
+ base->dev = 1;
+ base->range = ~0; /* range 1 .. ~0 */
base->get = base_probe;
- for (i = 1; i < MAX_BLKDEV; i++)
- probes[i] = base;
+ for (i = 0; i < MAX_PROBE_HASH; i++)
+ probes[i] = base; /* must remain last in chain */
blk_dev_init();
subsystem_register(&block_subsys);
return 0;
@@ -281,12 +277,14 @@
ssize_t (*show)(struct gendisk *, char *);
};
-static ssize_t disk_attr_show(struct kobject * kobj, struct attribute * attr,
- char * page)
+static ssize_t disk_attr_show(struct kobject *kobj, struct attribute *attr,
+ char *page)
{
- struct gendisk * disk = to_disk(kobj);
- struct disk_attribute * disk_attr = container_of(attr,struct disk_attribute,attr);
+ struct gendisk *disk = to_disk(kobj);
+ struct disk_attribute *disk_attr =
+ container_of(attr,struct disk_attribute,attr);
ssize_t ret = 0;
+
if (disk_attr->show)
ret = disk_attr->show(disk,page);
return ret;
@@ -303,11 +301,11 @@
}
static ssize_t disk_range_read(struct gendisk * disk, char *page)
{
- return sprintf(page, "%d\n",disk->minors);
+ return sprintf(page, "%d\n", disk->minors);
}
static ssize_t disk_size_read(struct gendisk * disk, char *page)
{
- return sprintf(page, "%llu\n",(unsigned long long)get_capacity(disk));
+ return sprintf(page, "%llu\n", (unsigned long long)get_capacity(disk));
}
static inline unsigned jiffies_to_msec(unsigned jif)
diff -Nru a/drivers/block/ll_rw_blk.c b/drivers/block/ll_rw_blk.c
--- a/drivers/block/ll_rw_blk.c Tue Mar 4 19:30:04 2003
+++ b/drivers/block/ll_rw_blk.c Tue Mar 4 19:30:04 2003
@@ -1461,6 +1461,7 @@
if (blk_rq_tagged(rq))
blk_queue_end_tag(q, rq);
+ drive_stat_acct(rq, rq->nr_sectors, 1);
__elv_add_request(q, rq, !at_head, 0);
q->request_fn(q);
spin_unlock_irqrestore(q->queue_lock, flags);
@@ -1892,7 +1893,7 @@
}
/**
- * generic_make_request: hand a buffer to it's device driver for I/O
+ * generic_make_request: hand a buffer to its device driver for I/O
* @bio: The bio describing the location in memory and on the device.
*
* generic_make_request() is used to make I/O requests of block
diff -Nru a/drivers/block/loop.c b/drivers/block/loop.c
--- a/drivers/block/loop.c Tue Mar 4 19:30:10 2003
+++ b/drivers/block/loop.c Tue Mar 4 19:30:10 2003
@@ -447,7 +447,22 @@
goto out_bh;
}
- bio = bio_copy(rbh, GFP_NOIO, rbh->bi_rw & WRITE);
+ /*
+ * When called on the page reclaim -> writepage path, this code can
+ * trivially consume all memory. So we drop PF_MEMALLOC to avoid
+ * stealing all the page reserves and throttle to the writeout rate.
+ * pdflush will have been woken by page reclaim. Let it do its work.
+ */
+ do {
+ int flags = current->flags;
+
+ current->flags &= ~PF_MEMALLOC;
+ bio = bio_copy(rbh, (GFP_ATOMIC & ~__GFP_HIGH) | __GFP_NOWARN,
+ rbh->bi_rw & WRITE);
+ current->flags = flags;
+ if (bio == NULL)
+ blk_congestion_wait(WRITE, HZ/10);
+ } while (bio == NULL);
bio->bi_end_io = loop_end_io_transfer;
bio->bi_private = rbh;
diff -Nru a/drivers/block/scsi_ioctl.c b/drivers/block/scsi_ioctl.c
--- a/drivers/block/scsi_ioctl.c Tue Mar 4 19:30:14 2003
+++ b/drivers/block/scsi_ioctl.c Tue Mar 4 19:30:14 2003
@@ -60,6 +60,7 @@
rq->flags |= REQ_NOMERGE;
rq->waiting = &wait;
+ drive_stat_acct(rq, rq->nr_sectors, 1);
elv_add_request(q, rq, 1, 1);
generic_unplug_device(q);
wait_for_completion(&wait);
diff -Nru a/drivers/cdrom/cdrom.c b/drivers/cdrom/cdrom.c
--- a/drivers/cdrom/cdrom.c Tue Mar 4 19:30:04 2003
+++ b/drivers/cdrom/cdrom.c Tue Mar 4 19:30:04 2003
@@ -172,8 +172,8 @@
-- Defined CD_DVD and CD_CHANGER log levels.
-- Fixed the CDROMREADxxx ioctls.
-- CDROMPLAYTRKIND uses the GPCMD_PLAY_AUDIO_MSF command - too few
- drives supported it. We loose the index part, however.
- -- Small modifications to accomodate opens of /dev/hdc1, required
+ drives supported it. We lose the index part, however.
+ -- Small modifications to accommodate opens of /dev/hdc1, required
for ide-cd to handle multisession discs.
-- Export cdrom_mode_sense and cdrom_mode_select.
-- init_cdrom_command() for setting up a cgc command.
diff -Nru a/drivers/cdrom/sbpcd.c b/drivers/cdrom/sbpcd.c
--- a/drivers/cdrom/sbpcd.c Tue Mar 4 19:30:05 2003
+++ b/drivers/cdrom/sbpcd.c Tue Mar 4 19:30:05 2003
@@ -341,7 +341,7 @@
* Trying to merge requests breaks this driver horribly (as in it goes
* boom and apparently has done so since 2.3.41). As it is a legacy
* driver for a horribly slow double speed CD on a hideous interface
- * designed for polled operation, I won't loose any sleep in simply
+ * designed for polled operation, I won't lose any sleep in simply
* disallowing merging. Paul G. 02/2001
*
* Thu May 30 14:14:47 CEST 2002:
diff -Nru a/drivers/char/Makefile b/drivers/char/Makefile
--- a/drivers/char/Makefile Tue Mar 4 19:30:13 2003
+++ b/drivers/char/Makefile Tue Mar 4 19:30:13 2003
@@ -83,7 +83,7 @@
clean-files := consolemap_deftbl.c defkeymap.c qtronixmap.c
$(obj)/consolemap_deftbl.c: $(src)/$(FONTMAPFILE)
- $(call do_cmd,CONMK $@,$(objtree)/scripts/conmakehash $< > $@)
+ $(call do_cmd,CONMK $@,$(objtree)/scripts/conmakehash $< > $@)
$(obj)/defkeymap.o: $(obj)/defkeymap.c
diff -Nru a/drivers/char/agp/Kconfig b/drivers/char/agp/Kconfig
--- a/drivers/char/agp/Kconfig Tue Mar 4 19:30:10 2003
+++ b/drivers/char/agp/Kconfig Tue Mar 4 19:30:10 2003
@@ -34,14 +34,17 @@
depends on AGP
config AGP_INTEL
- tristate "Intel 440LX/BX/GX and I815/I820/I830M/I830MP/I840/I845/I850/I860 support"
+ tristate "Intel 440LX/BX/GX and I815/I820/830M/I830MP/I840/I845/845G/I850/852GM/855GM/I860/865G support"
depends on AGP
help
This option gives you AGP support for the GLX component of the
- XFree86 4.x on Intel 440LX/BX/GX, 815, 820, 830, 840, 845, 850 and 860 chipsets.
+ XFree86 4.x on Intel 440LX/BX/GX, 815, 820, 830, 840, 845, 850
+ and 860 chipsets and full support for the 810, 815, 830M, 845G,
+ 852GM, 855GM and 865G integrated graphics chipsets.
You should say Y here if you use XFree86 3.3.6 or 4.x and want to
- use GLX or DRI. If unsure, say N.
+ use GLX or DRI, or if you have any Intel integrated graphics
+ chipsets. If unsure, say Y.
#config AGP_I810
# tristate "Intel I810/I815/I830M (on-board) support"
diff -Nru a/drivers/char/agp/agp.h b/drivers/char/agp/agp.h
--- a/drivers/char/agp/agp.h Tue Mar 4 19:30:08 2003
+++ b/drivers/char/agp/agp.h Tue Mar 4 19:30:08 2003
@@ -42,9 +42,8 @@
static void __attribute__((unused)) global_cache_flush(void)
{
- if (smp_call_function(ipi_handler, NULL, 1, 1) != 0)
+ if (on_each_cpu(ipi_handler, NULL, 1, 1) != 0)
panic(PFX "timed out waiting for the other CPUs!\n");
- flush_agp_cache();
}
#else
static inline void global_cache_flush(void)
@@ -216,6 +215,21 @@
/* This one is for I830MP w. an external graphic card */
#define INTEL_I830_ERRSTS 0x92
+
+/* Intel 855GM/852GM registers */
+#define I855_GMCH_GMS_STOLEN_0M 0x0
+#define I855_GMCH_GMS_STOLEN_1M (0x1 << 4)
+#define I855_GMCH_GMS_STOLEN_4M (0x2 << 4)
+#define I855_GMCH_GMS_STOLEN_8M (0x3 << 4)
+#define I855_GMCH_GMS_STOLEN_16M (0x4 << 4)
+#define I855_GMCH_GMS_STOLEN_32M (0x5 << 4)
+#define I85X_CAPID 0x44
+#define I85X_VARIANT_MASK 0x7
+#define I85X_VARIANT_SHIFT 5
+#define I855_GME 0x0
+#define I855_GM 0x4
+#define I852_GME 0x2
+#define I852_GM 0x5
/* intel 815 register */
#define INTEL_815_APCONT 0x51
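The new I855_GMCH_GMS_STOLEN_* defines encode the stolen-memory size in bits 6:4 of the GMCH control word, and intel-agp.c later subtracts 132K of overhead from each size. A minimal sketch of that decode, assuming a 0x70 field mask as in the I830 code (helper name is hypothetical):

```c
#define GMS_MASK 0x70                   /* graphics mode select, bits 6:4 */
#define KB(x) ((x) * 1024)
#define MB(x) (KB(1024) * (x))

/* Map the 855GM/852GM GMS field to usable stolen bytes; the 132K
 * deducted covers the GTT and popup overhead, as in the patch. */
static int i855_stolen_bytes(unsigned short gmch_ctrl)
{
    switch (gmch_ctrl & GMS_MASK) {
    case 0x1 << 4: return MB(1)  - KB(132);
    case 0x2 << 4: return MB(4)  - KB(132);
    case 0x3 << 4: return MB(8)  - KB(132);
    case 0x4 << 4: return MB(16) - KB(132);
    case 0x5 << 4: return MB(32) - KB(132);
    default:       return 0;    /* no pre-allocated video memory */
    }
}
```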
diff -Nru a/drivers/char/agp/alpha-agp.c b/drivers/char/agp/alpha-agp.c
--- a/drivers/char/agp/alpha-agp.c Tue Mar 4 19:30:11 2003
+++ b/drivers/char/agp/alpha-agp.c Tue Mar 4 19:30:11 2003
@@ -185,7 +185,7 @@
agp_bridge->agp_destroy_page = agp_generic_destroy_page;
agp_bridge->mode = agp->capability.lw;
agp_bridge->cant_use_aperture = 1;
- agp_bridgevm_ops = &alpha_core_agp_vm_ops;
+ agp_bridge->vm_ops = &alpha_core_agp_vm_ops;
alpha_core_agp_driver.dev = agp_bridge->dev;
agp_register_driver(&alpha_core_agp_driver);
diff -Nru a/drivers/char/agp/intel-agp.c b/drivers/char/agp/intel-agp.c
--- a/drivers/char/agp/intel-agp.c Tue Mar 4 19:30:08 2003
+++ b/drivers/char/agp/intel-agp.c Tue Mar 4 19:30:08 2003
@@ -2,6 +2,11 @@
* Intel AGPGART routines.
*/
+/*
+ * Intel(R) 855GM/852GM and 865G support added by David Dawes
+ * .
+ */
+
#include
#include
#include
@@ -294,34 +299,62 @@
u16 gmch_ctrl;
int gtt_entries;
u8 rdct;
+ int local = 0;
static const int ddt[4] = { 0, 16, 32, 64 };
pci_read_config_word(agp_bridge->dev,I830_GMCH_CTRL,&gmch_ctrl);
- switch (gmch_ctrl & I830_GMCH_GMS_MASK) {
- case I830_GMCH_GMS_STOLEN_512:
- gtt_entries = KB(512) - KB(132);
- printk(KERN_INFO PFX "detected %dK stolen memory.\n",gtt_entries / KB(1));
- break;
- case I830_GMCH_GMS_STOLEN_1024:
- gtt_entries = MB(1) - KB(132);
- printk(KERN_INFO PFX "detected %dK stolen memory.\n",gtt_entries / KB(1));
- break;
- case I830_GMCH_GMS_STOLEN_8192:
- gtt_entries = MB(8) - KB(132);
- printk(KERN_INFO PFX "detected %dK stolen memory.\n",gtt_entries / KB(1));
- break;
- case I830_GMCH_GMS_LOCAL:
- rdct = INREG8(intel_i830_private.registers,I830_RDRAM_CHANNEL_TYPE);
- gtt_entries = (I830_RDRAM_ND(rdct) + 1) * MB(ddt[I830_RDRAM_DDT(rdct)]);
- printk(KERN_INFO PFX "detected %dK local memory.\n",gtt_entries / KB(1));
- break;
- default:
- printk(KERN_INFO PFX "no video memory detected.\n");
- gtt_entries = 0;
- break;
+ if (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82830_HB ||
+ agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82845G_HB) {
+ switch (gmch_ctrl & I830_GMCH_GMS_MASK) {
+ case I830_GMCH_GMS_STOLEN_512:
+ gtt_entries = KB(512) - KB(132);
+ break;
+ case I830_GMCH_GMS_STOLEN_1024:
+ gtt_entries = MB(1) - KB(132);
+ break;
+ case I830_GMCH_GMS_STOLEN_8192:
+ gtt_entries = MB(8) - KB(132);
+ break;
+ case I830_GMCH_GMS_LOCAL:
+ rdct = INREG8(intel_i830_private.registers,
+ I830_RDRAM_CHANNEL_TYPE);
+ gtt_entries = (I830_RDRAM_ND(rdct) + 1) *
+ MB(ddt[I830_RDRAM_DDT(rdct)]);
+ local = 1;
+ break;
+ default:
+ gtt_entries = 0;
+ break;
+ }
+ } else {
+ switch (gmch_ctrl & I830_GMCH_GMS_MASK) {
+ case I855_GMCH_GMS_STOLEN_1M:
+ gtt_entries = MB(1) - KB(132);
+ break;
+ case I855_GMCH_GMS_STOLEN_4M:
+ gtt_entries = MB(4) - KB(132);
+ break;
+ case I855_GMCH_GMS_STOLEN_8M:
+ gtt_entries = MB(8) - KB(132);
+ break;
+ case I855_GMCH_GMS_STOLEN_16M:
+ gtt_entries = MB(16) - KB(132);
+ break;
+ case I855_GMCH_GMS_STOLEN_32M:
+ gtt_entries = MB(32) - KB(132);
+ break;
+ default:
+ gtt_entries = 0;
+ break;
+ }
}
-
+ if (gtt_entries > 0)
+ printk(KERN_INFO PFX "Detected %dK %s memory.\n",
+ gtt_entries / KB(1), local ? "local" : "stolen");
+ else
+ printk(KERN_INFO PFX
+ "No pre-allocated video memory detected.\n");
gtt_entries /= KB(4);
intel_i830_private.gtt_entries = gtt_entries;
@@ -374,9 +407,18 @@
u16 gmch_ctrl;
struct aper_size_info_fixed *values;
- pci_read_config_word(agp_bridge->dev,I830_GMCH_CTRL,&gmch_ctrl);
values = A_SIZE_FIX(agp_bridge->aperture_sizes);
+ if (agp_bridge->dev->device != PCI_DEVICE_ID_INTEL_82830_HB &&
+ agp_bridge->dev->device != PCI_DEVICE_ID_INTEL_82845G_HB) {
+ /* 855GM/852GM/865G has 128MB aperture size */
+ agp_bridge->previous_size = agp_bridge->current_size = (void *) values;
+ agp_bridge->aperture_size_idx = 0;
+ return(values[0].size);
+ }
+
+ pci_read_config_word(agp_bridge->dev,I830_GMCH_CTRL,&gmch_ctrl);
+
if ((gmch_ctrl & I830_GMCH_MEM_MASK) == I830_GMCH_MEM_128M) {
agp_bridge->previous_size = agp_bridge->current_size = (void *) values;
agp_bridge->aperture_size_idx = 0;
@@ -558,6 +600,7 @@
return(0);
}
+
static int intel_fetch_size(void)
{
int i;
@@ -1241,7 +1284,7 @@
{
.device_id = PCI_DEVICE_ID_INTEL_82830_HB,
.chipset = INTEL_I830_M,
- .chipset_name = "i830M",
+ .chipset_name = "830M",
.chipset_setup = intel_830mp_setup
},
{
@@ -1259,7 +1302,7 @@
{
.device_id = PCI_DEVICE_ID_INTEL_82845G_HB,
.chipset = INTEL_I845_G,
- .chipset_name = "i845G",
+ .chipset_name = "845G",
.chipset_setup = intel_845_setup
},
{
@@ -1269,11 +1312,23 @@
.chipset_setup = intel_850_setup
},
{
+ .device_id = PCI_DEVICE_ID_INTEL_82855_HB,
+ .chipset = INTEL_I855_PM,
+ .chipset_name = "855PM",
+ .chipset_setup = intel_845_setup
+ },
+ {
.device_id = PCI_DEVICE_ID_INTEL_82860_HB,
.chipset = INTEL_I860,
.chipset_name = "i860",
.chipset_setup = intel_860_setup
},
+ {
+ .device_id = PCI_DEVICE_ID_INTEL_82865_HB,
+ .chipset = INTEL_I865_G,
+ .chipset_name = "865G",
+ .chipset_setup = intel_845_setup
+ },
{ }, /* dummy final entry, always present */
};
@@ -1387,13 +1442,13 @@
if (i810_dev == NULL) {
/*
- * We probably have a I845MP chipset with an external graphics
+ * We probably have a I845G chipset with an external graphics
* card. It will be initialized later
*/
agp_bridge->type = INTEL_I845_G;
break;
}
- printk(KERN_INFO PFX "Detected an Intel 845G Chipset.\n");
+ printk(KERN_INFO PFX "Detected an Intel(R) 845G Chipset.\n");
agp_bridge->type = INTEL_I810;
return intel_i830_setup(i810_dev);
@@ -1408,7 +1463,63 @@
agp_bridge->type = INTEL_I830_M;
break;
}
- printk(KERN_INFO PFX "Detected an Intel 830M Chipset.\n");
+ printk(KERN_INFO PFX "Detected an Intel(R) 830M Chipset.\n");
+ agp_bridge->type = INTEL_I810;
+ return intel_i830_setup(i810_dev);
+
+ case PCI_DEVICE_ID_INTEL_82855_HB:
+ i810_dev = pci_find_device(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82855_IG, NULL);
+ if(i810_dev && PCI_FUNC(i810_dev->devfn) != 0)
+ i810_dev = pci_find_device(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82855_IG, i810_dev);
+
+ if (i810_dev == NULL) {
+ /* Intel 855PM with external graphic card */
+ /* It will be initialized later */
+ agp_bridge->type = INTEL_I855_PM;
+ break;
+ }
+ {
+ u32 capval = 0;
+ const char *name = "855GM/852GM";
+ pci_read_config_dword(dev, I85X_CAPID, &capval);
+ switch ((capval >> I85X_VARIANT_SHIFT) &
+ I85X_VARIANT_MASK) {
+ case I855_GME:
+ name = "855GME";
+ break;
+ case I855_GM:
+ name = "855GM";
+ break;
+ case I852_GME:
+ name = "852GME";
+ break;
+ case I852_GM:
+ name = "852GM";
+ break;
+ }
+ printk(KERN_INFO PFX
+ "Detected an Intel(R) %s Chipset.\n", name);
+ }
+ agp_bridge->type = INTEL_I810;
+ return intel_i830_setup(i810_dev);
+
+ case PCI_DEVICE_ID_INTEL_82865_HB:
+ i810_dev = pci_find_device(PCI_VENDOR_ID_INTEL,
+ PCI_DEVICE_ID_INTEL_82865_IG, NULL);
+ if (i810_dev && PCI_FUNC(i810_dev->devfn) != 0) {
+ i810_dev = pci_find_device(PCI_VENDOR_ID_INTEL,
+ PCI_DEVICE_ID_INTEL_82865_IG, i810_dev);
+ }
+
+ if (i810_dev == NULL) {
+ /*
+ * We probably have a 865G chipset with an external graphics
+ * card. It will be initialized later
+ */
+ agp_bridge->type = INTEL_I865_G;
+ break;
+ }
+ printk(KERN_INFO PFX "Detected an Intel(R) 865G Chipset.\n");
agp_bridge->type = INTEL_I810;
return intel_i830_setup(i810_dev);
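The 82855 probe above names the chipset by reading the capability dword at config offset I85X_CAPID (0x44) and extracting a 3-bit variant field. The same decode, pulled out as a standalone sketch (the function name is hypothetical; the constants mirror the agp.h defines added by this patch):

```c
#define I85X_VARIANT_MASK  0x7
#define I85X_VARIANT_SHIFT 5
#define I855_GME 0x0
#define I855_GM  0x4
#define I852_GME 0x2
#define I852_GM  0x5

/* Variant field lives in bits 7:5 of the capability dword read from
 * config space at I85X_CAPID; unknown values fall back to the
 * combined "855GM/852GM" label, as in the patch. */
static const char *i85x_name(unsigned int capval)
{
    switch ((capval >> I85X_VARIANT_SHIFT) & I85X_VARIANT_MASK) {
    case I855_GME: return "855GME";
    case I855_GM:  return "855GM";
    case I852_GME: return "852GME";
    case I852_GM:  return "852GM";
    default:       return "855GM/852GM";
    }
}
```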
diff -Nru a/drivers/char/cd1865.h b/drivers/char/cd1865.h
--- a/drivers/char/cd1865.h Tue Mar 4 19:30:10 2003
+++ b/drivers/char/cd1865.h Tue Mar 4 19:30:10 2003
@@ -54,9 +54,9 @@
#define CD186x_RCSR 0x7a /* Receiver Character Status Register */
#define CD186x_TDR 0x7b /* Transmit Data Register */
#define CD186x_EOIR 0x7f /* End of Interrupt Register */
-#define CD186x_MRAR 0x75 /* Modem Request Acknowlege register */
-#define CD186x_TRAR 0x76 /* Transmit Request Acknowlege register */
-#define CD186x_RRAR 0x77 /* Receive Request Acknowlege register */
+#define CD186x_MRAR 0x75 /* Modem Request Acknowledge register */
+#define CD186x_TRAR 0x76 /* Transmit Request Acknowledge register */
+#define CD186x_RRAR 0x77 /* Receive Request Acknowledge register */
#define CD186x_SRCR 0x66 /* Service Request Configuration register */
/* Channel Registers */
diff -Nru a/drivers/char/cyclades.c b/drivers/char/cyclades.c
--- a/drivers/char/cyclades.c Tue Mar 4 19:30:05 2003
+++ b/drivers/char/cyclades.c Tue Mar 4 19:30:05 2003
@@ -151,7 +151,7 @@
* Revision 2.2.1.4 1998/08/04 11:02:50 ivan
* /proc/cyclades implementation with great collaboration of
* Marc Lewis ;
- * cyy_interrupt was changed to avoid occurence of kernel oopses
+ * cyy_interrupt was changed to avoid occurrence of kernel oopses
* during PPP operation.
*
* Revision 2.2.1.3 1998/06/01 12:09:10 ivan
diff -Nru a/drivers/char/drm/drm_vm.h b/drivers/char/drm/drm_vm.h
--- a/drivers/char/drm/drm_vm.h Tue Mar 4 19:30:14 2003
+++ b/drivers/char/drm/drm_vm.h Tue Mar 4 19:30:14 2003
@@ -147,7 +147,7 @@
}
/* Special close routine which deletes map information if we are the last
- * person to close a mapping and its not in the global maplist.
+ * person to close a mapping and it's not in the global maplist.
*/
void DRM(vm_shm_close)(struct vm_area_struct *vma)
diff -Nru a/drivers/char/drm/i810_drm.h b/drivers/char/drm/i810_drm.h
--- a/drivers/char/drm/i810_drm.h Tue Mar 4 19:30:03 2003
+++ b/drivers/char/drm/i810_drm.h Tue Mar 4 19:30:03 2003
@@ -38,7 +38,7 @@
* - zbuffer linear offset and pitch -- also invarient
* - drawing origin in back and depth buffers.
*
- * Keep the depth/back buffer state here to acommodate private buffers
+ * Keep the depth/back buffer state here to accommodate private buffers
* in the future.
*/
#define I810_DESTREG_DI0 0 /* CMD_OP_DESTBUFFER_INFO (2 dwords) */
diff -Nru a/drivers/char/drm/i830_drm.h b/drivers/char/drm/i830_drm.h
--- a/drivers/char/drm/i830_drm.h Tue Mar 4 19:30:07 2003
+++ b/drivers/char/drm/i830_drm.h Tue Mar 4 19:30:07 2003
@@ -68,7 +68,7 @@
* - zbuffer linear offset and pitch -- also invarient
* - drawing origin in back and depth buffers.
*
- * Keep the depth/back buffer state here to acommodate private buffers
+ * Keep the depth/back buffer state here to accommodate private buffers
* in the future.
*/
diff -Nru a/drivers/char/epca.c b/drivers/char/epca.c
--- a/drivers/char/epca.c Tue Mar 4 19:30:10 2003
+++ b/drivers/char/epca.c Tue Mar 4 19:30:10 2003
@@ -897,7 +897,7 @@
Remember copy_from_user WILL generate a page fault if the
user memory being accessed has been swapped out. This can
cause this routine to temporarily sleep while this page
- fault is occuring.
+ fault is occurring.
----------------------------------------------------------------- */
@@ -1865,7 +1865,7 @@
case PCXI:
board_id = inb((int)bd->port);
if ((board_id & 0x1) == 0x1)
- { /* Begin its an XI card */
+ { /* Begin it's an XI card */
/* Is it a 64K board */
if ((board_id & 0x30) == 0)
@@ -2743,11 +2743,11 @@
/* ---------------------------------------------------------------
Command sets channels iflag structure on the board. Such things
- as input soft flow control, handeling of parity errors, and
- break handeling are all set here.
+ as input soft flow control, handling of parity errors, and
+ break handling are all set here.
------------------------------------------------------------------- */
- /* break handeling, parity handeling, input stripping, flow control chars */
+ /* break handling, parity handling, input stripping, flow control chars */
fepcmd(ch, SETIFLAGS, (unsigned int) ch->fepiflag, 0, 0, 0);
}
@@ -3516,7 +3516,7 @@
/* ------------------------------------------------------------------
The below routines pc_throttle and pc_unthrottle are used
to slow (And resume) the receipt of data into the kernels
- receive buffers. The exact occurence of this depends on the
+ receive buffers. The exact occurrence of this depends on the
size of the kernels receive buffer and what the 'watermarks'
are set to for that buffer. See the n_ttys.c file for more
details.
diff -Nru a/drivers/char/ftape/lowlevel/ftape-calibr.c b/drivers/char/ftape/lowlevel/ftape-calibr.c
--- a/drivers/char/ftape/lowlevel/ftape-calibr.c Tue Mar 4 19:30:05 2003
+++ b/drivers/char/ftape/lowlevel/ftape-calibr.c Tue Mar 4 19:30:05 2003
@@ -56,7 +56,7 @@
* used directly to implement fine-grained timeouts. However, on
* Alpha PCs, the 8254 is *not* used to implement the clock tick
* (which is 1024 Hz, normally) and the 8254 timer runs at some
- * "random" frequency (it seems to run at 18Hz, but its not safe to
+ * "random" frequency (it seems to run at 18Hz, but it's not safe to
* rely on this value). Instead, we use the Alpha's "rpcc"
* instruction to read cycle counts. As this is a 32 bit counter,
* it will overflow only once per 30 seconds (on a 200MHz machine),
diff -Nru a/drivers/char/ftape/lowlevel/ftape_syms.c b/drivers/char/ftape/lowlevel/ftape_syms.c
--- a/drivers/char/ftape/lowlevel/ftape_syms.c Tue Mar 4 19:30:07 2003
+++ b/drivers/char/ftape/lowlevel/ftape_syms.c Tue Mar 4 19:30:07 2003
@@ -22,7 +22,7 @@
*
* This file contains the symbols that the ftape low level
* part of the QIC-40/80/3010/3020 floppy-tape driver "ftape"
- * exports to it's high level clients
+ * exports to its high level clients
*/
#include
diff -Nru a/drivers/char/ftape/zftape/zftape-vtbl.h b/drivers/char/ftape/zftape/zftape-vtbl.h
--- a/drivers/char/ftape/zftape/zftape-vtbl.h Tue Mar 4 19:30:07 2003
+++ b/drivers/char/ftape/zftape/zftape-vtbl.h Tue Mar 4 19:30:07 2003
@@ -176,7 +176,7 @@
const zft_position *pos);
/* this function decrements the zft_seg_pos counter if we are right
- * at the beginning of a segment. This is to handel fsfm/bsfm -- we
+ * at the beginning of a segment. This is to handle fsfm/bsfm -- we
* need to position before the eof mark. NOTE: zft_tape_pos is not
* changed
*/
diff -Nru a/drivers/char/ftape/zftape/zftape-write.c b/drivers/char/ftape/zftape/zftape-write.c
--- a/drivers/char/ftape/zftape/zftape-write.c Tue Mar 4 19:30:09 2003
+++ b/drivers/char/ftape/zftape/zftape-write.c Tue Mar 4 19:30:09 2003
@@ -357,7 +357,7 @@
*volume = zft_find_volume(pos->seg_pos);
DUMP_VOLINFO(ft_t_noise, "", *volume);
zft_just_before_eof = 0;
- /* now merge with old data if neccessary */
+ /* now merge with old data if necessary */
if (!zft_qic_mode && pos->seg_byte_pos != 0){
result = zft_fetch_segment(pos->seg_pos,
zft_deblock_buf,
diff -Nru a/drivers/char/generic_serial.c b/drivers/char/generic_serial.c
--- a/drivers/char/generic_serial.c Tue Mar 4 19:30:09 2003
+++ b/drivers/char/generic_serial.c Tue Mar 4 19:30:09 2003
@@ -142,14 +142,14 @@
/* Can't copy more? break out! */
if (c <= 0) break;
- if (from_user)
+ if (from_user) {
if (copy_from_user (port->xmit_buf + port->xmit_head,
buf, c)) {
up (& port->port_write_sem);
return -EFAULT;
}
- else
+ } else
memcpy (port->xmit_buf + port->xmit_head, buf, c);
port -> xmit_cnt += c;
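The generic_serial.c hunk is a classic dangling-else fix: without braces, the `else` binds to the inner `if (copy_from_user(...))`, not to `if (from_user)`. A reduced model of both shapes, with 1/0 flags standing in for the user-copy and memcpy paths:

```c
/* Pre-patch shape: the else binds to the inner if, so memcpy runs for
 * from_user && !copy_failed, and never for !from_user. Returns 1 if
 * the memcpy branch ran, -1 on the -EFAULT path. */
static int buggy(int from_user, int copy_failed)
{
    int memcpy_ran = 0;
    if (from_user)
        if (copy_failed)
            return -1;          /* up(); return -EFAULT; */
        else                    /* dangling: pairs with the INNER if */
            memcpy_ran = 1;
    return memcpy_ran;
}

/* Post-patch shape: braces force the else onto if (from_user). */
static int fixed(int from_user, int copy_failed)
{
    int memcpy_ran = 0;
    if (from_user) {
        if (copy_failed)
            return -1;
    } else
        memcpy_ran = 1;
    return memcpy_ran;
}
```

In the buggy shape a kernel-space write ends up going through the user-copy `else` arm, and kernel-space callers with `from_user == 0` never copy at all.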
diff -Nru a/drivers/char/hangcheck-timer.c b/drivers/char/hangcheck-timer.c
--- a/drivers/char/hangcheck-timer.c Tue Mar 4 19:30:07 2003
+++ b/drivers/char/hangcheck-timer.c Tue Mar 4 19:30:07 2003
@@ -16,7 +16,7 @@
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
- * You should have recieved a copy of the GNU General Public
+ * You should have received a copy of the GNU General Public
* License along with this program; if not, write to the
* Free Software Foundation, Inc., 59 Temple Place - Suite 330,
* Boston, MA 021110-1307, USA.
diff -Nru a/drivers/char/mwave/tp3780i.h b/drivers/char/mwave/tp3780i.h
--- a/drivers/char/mwave/tp3780i.h Tue Mar 4 19:30:04 2003
+++ b/drivers/char/mwave/tp3780i.h Tue Mar 4 19:30:04 2003
@@ -72,7 +72,7 @@
#define TP_CFG_DisableLBusTimeout 0 /* Enable LBus timeout */
#define TP_CFG_N_Divisor 32 /* Clock = 39.1608 Mhz */
#define TP_CFG_M_Multiplier 37 /* " */
-#define TP_CFG_PllBypass 0 /* dont bypass */
+#define TP_CFG_PllBypass 0 /* don't bypass */
#define TP_CFG_ChipletEnable 0xFFFF /* Enable all chiplets */
typedef struct {
diff -Nru a/drivers/char/n_hdlc.c b/drivers/char/n_hdlc.c
--- a/drivers/char/n_hdlc.c Tue Mar 4 19:30:05 2003
+++ b/drivers/char/n_hdlc.c Tue Mar 4 19:30:05 2003
@@ -833,7 +833,7 @@
poll_wait(filp, &tty->read_wait, wait);
poll_wait(filp, &tty->write_wait, wait);
- /* set bits for operations that wont block */
+ /* set bits for operations that won't block */
if(n_hdlc->rx_buf_list.head)
mask |= POLLIN | POLLRDNORM; /* readable */
if (test_bit(TTY_OTHER_CLOSED, &tty->flags))
diff -Nru a/drivers/char/nvram.c b/drivers/char/nvram.c
--- a/drivers/char/nvram.c Tue Mar 4 19:30:07 2003
+++ b/drivers/char/nvram.c Tue Mar 4 19:30:07 2003
@@ -606,7 +606,7 @@
#if MACH == COBALT
-/* the cobalt CMOS has a wider range of it's checksum */
+/* the cobalt CMOS has a wider range of its checksum */
static int cobalt_check_checksum(void)
{
int i;
diff -Nru a/drivers/char/nwflash.c b/drivers/char/nwflash.c
--- a/drivers/char/nwflash.c Tue Mar 4 19:30:14 2003
+++ b/drivers/char/nwflash.c Tue Mar 4 19:30:14 2003
@@ -215,7 +215,7 @@
temp = ((int) (p + count) >> 16) - nBlock + 1;
/*
- * write ends at exactly 64k boundry?
+ * write ends at exactly 64k boundary?
*/
if (((int) (p + count) & 0xFFFF) == 0)
temp -= 1;
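The nwflash.c comment describes computing how many 64K flash blocks a write touches, with the correction that a write ending exactly on a 64K boundary does not spill into the next block. A small standalone version of that arithmetic (function name is illustrative):

```c
/* Count 64K blocks spanned by a write of `count` bytes at offset `p`,
 * not counting the next block when the write ends exactly on a 64K
 * boundary -- the same (p + count) & 0xFFFF test as nwflash.c. */
static int blocks_touched(unsigned int p, unsigned int count)
{
    unsigned int first = p >> 16;
    unsigned int last  = (p + count) >> 16;
    int n = (int)(last - first) + 1;

    if (((p + count) & 0xFFFF) == 0)
        n -= 1;
    return n;
}
```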
diff -Nru a/drivers/char/pcmcia/synclink_cs.c b/drivers/char/pcmcia/synclink_cs.c
--- a/drivers/char/pcmcia/synclink_cs.c Tue Mar 4 19:30:07 2003
+++ b/drivers/char/pcmcia/synclink_cs.c Tue Mar 4 19:30:07 2003
@@ -4505,7 +4505,7 @@
if (debug_level >= DEBUG_LEVEL_INFO)
printk("mgslpc_sppp_rx_done(%s)\n",info->netname);
if (skb == NULL) {
- printk(KERN_NOTICE "%s: cant alloc skb, dropping packet\n",
+ printk(KERN_NOTICE "%s: can't alloc skb, dropping packet\n",
info->netname);
info->netstats.rx_dropped++;
return;
diff -Nru a/drivers/char/rio/list.h b/drivers/char/rio/list.h
--- a/drivers/char/rio/list.h Tue Mar 4 19:30:12 2003
+++ b/drivers/char/rio/list.h Tue Mar 4 19:30:12 2003
@@ -111,7 +111,7 @@
/*
** can_remove_receive( PacketP, PortP ) returns non-zero if PKT_IN_USE is set
** for the next packet on the queue. It will also set PacketP to point to the
-** relevent packet, [having cleared the PKT_IN_USE bit]. If PKT_IN_USE is clear,
+** relevant packet, [having cleared the PKT_IN_USE bit]. If PKT_IN_USE is clear,
** then can_remove_receive() returns 0.
*/
#if defined(MIPS) || defined(nx6000) || defined(drs6000) || defined(UWsparc)
diff -Nru a/drivers/char/rio/parmmap.h b/drivers/char/rio/parmmap.h
--- a/drivers/char/rio/parmmap.h Tue Mar 4 19:30:13 2003
+++ b/drivers/char/rio/parmmap.h Tue Mar 4 19:30:13 2003
@@ -31,7 +31,7 @@
----------------------------------------------------------------------------
Date By Description
----------------------------------------------------------------------------
-6/4/1991 jonb Made changes to accomodate Mips R3230 bus
+6/4/1991 jonb Made changes to accommodate Mips R3230 bus
***************************************************************************/
#ifndef _parmap_h
diff -Nru a/drivers/char/rio/rio_linux.c b/drivers/char/rio/rio_linux.c
--- a/drivers/char/rio/rio_linux.c Tue Mar 4 19:30:10 2003
+++ b/drivers/char/rio/rio_linux.c Tue Mar 4 19:30:10 2003
@@ -464,7 +464,7 @@
recursive calls will hang the machine in the interrupt routine.
- hardware twiddling goes before "recursive". Otherwise when we
- poll the card, and a recursive interrupt happens, we wont
+ poll the card, and a recursive interrupt happens, we won't
ack the card, so it might keep on interrupting us. (especially
level sensitive interrupt systems like PCI).
diff -Nru a/drivers/char/rio/rioinit.c b/drivers/char/rio/rioinit.c
--- a/drivers/char/rio/rioinit.c Tue Mar 4 19:30:05 2003
+++ b/drivers/char/rio/rioinit.c Tue Mar 4 19:30:05 2003
@@ -145,7 +145,7 @@
p->RIOHosts[p->RIONumHosts].PaddrP = info->location;
/*
- ** Check that we are able to accomodate another host
+ ** Check that we are able to accommodate another host
*/
if ( p->RIONumHosts >= RIO_HOSTS )
{
diff -Nru a/drivers/char/rio/rioparam.c b/drivers/char/rio/rioparam.c
--- a/drivers/char/rio/rioparam.c Tue Mar 4 19:30:14 2003
+++ b/drivers/char/rio/rioparam.c Tue Mar 4 19:30:14 2003
@@ -714,7 +714,7 @@
/*
** can_remove_receive(PktP,P) returns non-zero if PKT_IN_USE is set
** for the next packet on the queue. It will also set PktP to point to the
-** relevent packet, [having cleared the PKT_IN_USE bit]. If PKT_IN_USE is clear,
+** relevant packet, [having cleared the PKT_IN_USE bit]. If PKT_IN_USE is clear,
** then can_remove_receive() returns 0.
*/
int
diff -Nru a/drivers/char/rio/rioroute.c b/drivers/char/rio/rioroute.c
--- a/drivers/char/rio/rioroute.c Tue Mar 4 19:30:09 2003
+++ b/drivers/char/rio/rioroute.c Tue Mar 4 19:30:09 2003
@@ -521,7 +521,7 @@
/*
** If either of the modules on this unit is read-only or write-only
** or none-xprint, then we need to transfer that info over to the
- ** relevent ports.
+ ** relevant ports.
*/
if ( HostP->Mapping[ThisUnit].SysPort != NO_PORT )
{
@@ -976,7 +976,7 @@
/*
** We loop for all entries even after finding an entry and
** zeroing it because we may have two entries to delete if
- ** its a 16 port RTA.
+ ** it's a 16 port RTA.
*/
for (entry = 0; entry < TOTAL_MAP_ENTRIES; entry++)
{
diff -Nru a/drivers/char/rio/riotable.c b/drivers/char/rio/riotable.c
--- a/drivers/char/rio/riotable.c Tue Mar 4 19:30:13 2003
+++ b/drivers/char/rio/riotable.c Tue Mar 4 19:30:13 2003
@@ -309,7 +309,7 @@
}
/*
- ** wow! if we get here then its a goody!
+ ** wow! if we get here then it's a goody!
*/
/*
diff -Nru a/drivers/char/rio/riotty.c b/drivers/char/rio/riotty.c
--- a/drivers/char/rio/riotty.c Tue Mar 4 19:30:07 2003
+++ b/drivers/char/rio/riotty.c Tue Mar 4 19:30:07 2003
@@ -737,10 +737,10 @@
RIOCookMode(struct ttystatics *tp)
{
/*
- ** We cant handle tm.c_mstate != 0 on SCO
- ** We cant handle mapping
- ** We cant handle non-ttwrite line disc.
- ** We cant handle lflag XCASE
+ ** We can't handle tm.c_mstate != 0 on SCO
+ ** We can't handle mapping
+ ** We can't handle non-ttwrite line disc.
+ ** We can't handle lflag XCASE
** We can handle oflag OPOST & (OCRNL, ONLCR, TAB3)
*/
diff -Nru a/drivers/char/rocket_int.h b/drivers/char/rocket_int.h
--- a/drivers/char/rocket_int.h Tue Mar 4 19:30:05 2003
+++ b/drivers/char/rocket_int.h Tue Mar 4 19:30:05 2003
@@ -834,7 +834,7 @@
/***************************************************************************
Function: sInitChanDefaults
-Purpose: Initialize a channel structure to it's default state.
+Purpose: Initialize a channel structure to its default state.
Call: sInitChanDefaults(ChP)
CHANNEL_T *ChP; Ptr to the channel structure
Comments: This function must be called once for every channel structure
diff -Nru a/drivers/char/scc.h b/drivers/char/scc.h
--- a/drivers/char/scc.h Tue Mar 4 19:30:13 2003
+++ b/drivers/char/scc.h Tue Mar 4 19:30:13 2003
@@ -428,7 +428,7 @@
* for that purpose. They assume that a local variable 'port' is
* declared and pointing to the port's scc_struct entry. The
* variants with "_NB" appended should be used if no other SCC
- * accesses follow immediatly (within 0.5 usecs). They just skip the
+ * accesses follow immediately (within 0.5 usecs). They just skip the
* final delay nops.
*
* Please note that accesses to SCC registers should only take place
diff -Nru a/drivers/char/ser_a2232.c b/drivers/char/ser_a2232.c
--- a/drivers/char/ser_a2232.c Tue Mar 4 19:30:05 2003
+++ b/drivers/char/ser_a2232.c Tue Mar 4 19:30:05 2003
@@ -590,7 +590,7 @@
printk("A2232: 65EC02 software sent SYNC event, don't know what to do. Ignoring.");
break;
default:
- printk("A2232: 65EC02 software broken, unknown event type %d occured.\n",ibuf[bufpos-1]);
+ printk("A2232: 65EC02 software broken, unknown event type %d occurred.\n",ibuf[bufpos-1]);
} /* event type switch */
break;
case A2232INCTL_CHAR:
@@ -599,7 +599,7 @@
bufpos++;
break;
default:
- printk("A2232: 65EC02 software broken, unknown data type %d occured.\n",cbuf[bufpos]);
+ printk("A2232: 65EC02 software broken, unknown data type %d occurred.\n",cbuf[bufpos]);
bufpos++;
} /* switch on input data type */
} /* while there's something in the buffer */
diff -Nru a/drivers/char/sx.c b/drivers/char/sx.c
--- a/drivers/char/sx.c Tue Mar 4 19:30:07 2003
+++ b/drivers/char/sx.c Tue Mar 4 19:30:07 2003
@@ -1216,7 +1216,7 @@
recursive calls will hang the machine in the interrupt routine.
- hardware twiddling goes before "recursive". Otherwise when we
- poll the card, and a recursive interrupt happens, we wont
+ poll the card, and a recursive interrupt happens, we won't
ack the card, so it might keep on interrupting us. (especially
level sensitive interrupt systems like PCI).
diff -Nru a/drivers/char/synclink.c b/drivers/char/synclink.c
--- a/drivers/char/synclink.c Tue Mar 4 19:30:07 2003
+++ b/drivers/char/synclink.c Tue Mar 4 19:30:07 2003
@@ -4260,7 +4260,7 @@
if ( info->tx_holding_count ) {
/* determine if we have enough tx dma buffers
- * to accomodate the next tx frame
+ * to accommodate the next tx frame
*/
struct tx_holding_buffer *ptx =
&info->tx_holding_buffers[info->get_tx_holding_index];
@@ -7621,7 +7621,7 @@
status = info->rx_buffer_list[0].status;
if ( status & (BIT8 + BIT3 + BIT1) ) {
- /* receive error has occured */
+ /* receive error has occurred */
rc = FALSE;
} else {
if ( memcmp( info->tx_buffer_list[0].virt_addr ,
@@ -8103,7 +8103,7 @@
if (debug_level >= DEBUG_LEVEL_INFO)
printk("mgsl_sppp_rx_done(%s)\n",info->netname);
if (skb == NULL) {
- printk(KERN_NOTICE "%s: cant alloc skb, dropping packet\n",
+ printk(KERN_NOTICE "%s: can't alloc skb, dropping packet\n",
info->netname);
info->netstats.rx_dropped++;
return;
diff -Nru a/drivers/char/synclinkmp.c b/drivers/char/synclinkmp.c
--- a/drivers/char/synclinkmp.c Tue Mar 4 19:30:05 2003
+++ b/drivers/char/synclinkmp.c Tue Mar 4 19:30:05 2003
@@ -1829,7 +1829,7 @@
if (debug_level >= DEBUG_LEVEL_INFO)
printk("sppp_rx_done(%s)\n",info->netname);
if (skb == NULL) {
- printk(KERN_NOTICE "%s: cant alloc skb, dropping packet\n",
+ printk(KERN_NOTICE "%s: can't alloc skb, dropping packet\n",
info->netname);
info->netstats.rx_dropped++;
return;
diff -Nru a/drivers/char/tty_io.c b/drivers/char/tty_io.c
--- a/drivers/char/tty_io.c Tue Mar 4 19:30:10 2003
+++ b/drivers/char/tty_io.c Tue Mar 4 19:30:10 2003
@@ -1944,27 +1944,25 @@
schedule_delayed_work(&tty->flip.work, 1);
return;
}
+
+ spin_lock_irqsave(&tty->read_lock, flags);
if (tty->flip.buf_num) {
cp = tty->flip.char_buf + TTY_FLIPBUF_SIZE;
fp = tty->flip.flag_buf + TTY_FLIPBUF_SIZE;
tty->flip.buf_num = 0;
-
- local_irq_save(flags); // FIXME: is this safe?
tty->flip.char_buf_ptr = tty->flip.char_buf;
tty->flip.flag_buf_ptr = tty->flip.flag_buf;
} else {
cp = tty->flip.char_buf;
fp = tty->flip.flag_buf;
tty->flip.buf_num = 1;
-
- local_irq_save(flags); // FIXME: is this safe?
tty->flip.char_buf_ptr = tty->flip.char_buf + TTY_FLIPBUF_SIZE;
tty->flip.flag_buf_ptr = tty->flip.flag_buf + TTY_FLIPBUF_SIZE;
}
count = tty->flip.count;
tty->flip.count = 0;
- local_irq_restore(flags); // FIXME: is this safe?
-
+ spin_unlock_irqrestore(&tty->read_lock, flags);
+
tty->ldisc.receive_buf(tty, cp, fp, count);
}
diff -Nru a/drivers/char/vt.c b/drivers/char/vt.c
--- a/drivers/char/vt.c Tue Mar 4 19:30:13 2003
+++ b/drivers/char/vt.c Tue Mar 4 19:30:13 2003
@@ -1882,7 +1882,7 @@
buf = con_buf;
}
- /* At this point 'buf' is guarenteed to be a kernel buffer
+ /* At this point 'buf' is guaranteed to be a kernel buffer
* and therefore no access to userspace (and therefore sleeping)
* will be needed. The con_buf_sem serializes all tty based
* console rendering and vcs write/read operations. We hold
@@ -2872,7 +2872,7 @@
* this is done in order to maintain compatibility with the EGA/VGA fonts. It
* is upto the actual low-level console-driver convert data into its favorite
* format (maybe we should add a `fontoffset' field to the `display'
- * structure so we wont have to convert the fontdata all the time.
+ * structure so we won't have to convert the fontdata all the time.
* /Jes
*/
diff -Nru a/drivers/char/watchdog/Kconfig b/drivers/char/watchdog/Kconfig
--- a/drivers/char/watchdog/Kconfig Tue Mar 4 19:30:11 2003
+++ b/drivers/char/watchdog/Kconfig Tue Mar 4 19:30:11 2003
@@ -313,6 +313,19 @@
You can compile this driver directly into the kernel, or use
it as a module. The module will be called sc520_wdt.
+config AMD7XX_TCO
+ tristate "AMD 766/768 TCO Timer/Watchdog"
+ depends on WATCHDOG
+ help
+ This is the driver for the hardware watchdog built into the
+ AMD 766/768 chipsets.
+ This watchdog simply watches your kernel to make sure it doesn't
+ freeze, and if it does, it reboots your computer after a certain
+ amount of time.
+
+ You can compile this driver directly into the kernel, or use
+ it as a module. The module will be called amd7xx_tco.
+
config ALIM7101_WDT
tristate "ALi M7101 PMU Computer Watchdog"
depends on WATCHDOG
diff -Nru a/drivers/char/watchdog/Makefile b/drivers/char/watchdog/Makefile
--- a/drivers/char/watchdog/Makefile Tue Mar 4 19:30:11 2003
+++ b/drivers/char/watchdog/Makefile Tue Mar 4 19:30:11 2003
@@ -30,3 +30,4 @@
obj-$(CONFIG_SC1200_WDT) += sc1200wdt.o
obj-$(CONFIG_WAFER_WDT) += wafer5823wdt.o
obj-$(CONFIG_CPU5_WDT) += cpu5wdt.o
+obj-$(CONFIG_AMD7XX_TCO) += amd7xx_tco.o
diff -Nru a/drivers/char/watchdog/acquirewdt.c b/drivers/char/watchdog/acquirewdt.c
--- a/drivers/char/watchdog/acquirewdt.c Tue Mar 4 19:30:10 2003
+++ b/drivers/char/watchdog/acquirewdt.c Tue Mar 4 19:30:10 2003
@@ -141,8 +141,6 @@
spin_unlock(&acq_lock);
return -EBUSY;
}
- if (nowayout)
- MOD_INC_USE_COUNT;
/* Activate */
acq_is_open=1;
diff -Nru a/drivers/char/watchdog/amd7xx_tco.c b/drivers/char/watchdog/amd7xx_tco.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/drivers/char/watchdog/amd7xx_tco.c Tue Mar 4 19:30:14 2003
@@ -0,0 +1,373 @@
+/*
+ * AMD 766/768 TCO Timer Driver
+ * (c) Copyright 2002 Zwane Mwaikambo
+ * All Rights Reserved.
+ *
+ * Parts from;
+ * Hardware driver for the AMD 768 Random Number Generator (RNG)
+ * (c) Copyright 2001 Red Hat Inc
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation.
+ *
+ * The author(s) of this software shall not be held liable for damages
+ * of any nature resulting due to the use of this software. This
+ * software is provided AS-IS with no warranties.
+ *
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define AMDTCO_MODULE_VER "build 20020601"
+#define AMDTCO_MODULE_NAME "amd7xx_tco"
+#define PFX AMDTCO_MODULE_NAME ": "
+
+#define MAX_TIMEOUT 38 /* max of 38 seconds */
+
+/* pmbase registers */
+#define GLOBAL_SMI_REG 0x2a
+#define TCO_EN (1 << 1) /* bit 1 in global SMI register */
+#define TCO_RELOAD_REG 0x40 /* bits 0-5 are current count, 6-7 are reserved */
+#define TCO_INITVAL_REG 0x41 /* bits 0-5 are value to load, 6-7 are reserved */
+#define TCO_TIMEOUT_MASK 0x3f
+#define TCO_STATUS2_REG 0x46
+#define NDTO_STS2 (1 << 1) /* we're interested in the second timeout */
+#define BOOT_STS (1 << 2) /* will be set if NDTO_STS2 was set before reboot */
+#define TCO_CTRL1_REG 0x48
+#define TCO_HALT (1 << 11)
+
+static char banner[] __initdata = KERN_INFO PFX AMDTCO_MODULE_VER "\n";
+static int timeout = 38;
+static u32 pmbase; /* PMxx I/O base */
+static struct pci_dev *dev;
+static struct semaphore open_sem;
+static spinlock_t amdtco_lock; /* only for device access */
+static int expect_close = 0;
+
+MODULE_PARM(timeout, "i");
+MODULE_PARM_DESC(timeout, "range is 0-38 seconds, default is 38");
+
+static inline int amdtco_status(void)
+{
+ u16 reg;
+ int status = 0;
+
+ reg = inw(pmbase+TCO_CTRL1_REG);
+ if ((reg & TCO_HALT) == 0)
+ status |= WDIOF_KEEPALIVEPING;
+
+ reg = inb(pmbase+TCO_STATUS2_REG);
+ if (reg & BOOT_STS)
+ status |= WDIOF_CARDRESET;
+
+ return status;
+}
+
+static inline void amdtco_ping(void)
+{
+ u8 reg;
+
+ spin_lock(&amdtco_lock);
+ reg = inb(pmbase+TCO_RELOAD_REG);
+ outb(1 | reg, pmbase+TCO_RELOAD_REG);
+ spin_unlock(&amdtco_lock);
+}
+
+static inline int amdtco_gettimeout(void)
+{
+ return inb(pmbase+TCO_RELOAD_REG) & TCO_TIMEOUT_MASK;
+}
+
+static inline void amdtco_settimeout(unsigned int timeout)
+{
+ u8 reg;
+
+ spin_lock(&amdtco_lock);
+ reg = inb(pmbase+TCO_INITVAL_REG);
+ reg |= timeout & TCO_TIMEOUT_MASK;
+ outb(reg, pmbase+TCO_INITVAL_REG);
+ spin_unlock(&amdtco_lock);
+}
+
+static inline void amdtco_global_enable(void)
+{
+ u16 reg;
+
+ spin_lock(&amdtco_lock);
+ reg = inw(pmbase+GLOBAL_SMI_REG);
+ reg |= TCO_EN;
+ outw(reg, pmbase+GLOBAL_SMI_REG);
+ spin_unlock(&amdtco_lock);
+}
+
+static inline void amdtco_enable(void)
+{
+ u16 reg;
+
+ spin_lock(&amdtco_lock);
+ reg = inw(pmbase+TCO_CTRL1_REG);
+ reg &= ~TCO_HALT;
+ outw(reg, pmbase+TCO_CTRL1_REG);
+ spin_unlock(&amdtco_lock);
+}
+
+static inline void amdtco_disable(void)
+{
+ u16 reg;
+
+ spin_lock(&amdtco_lock);
+ reg = inw(pmbase+TCO_CTRL1_REG);
+ reg |= TCO_HALT;
+ outw(reg, pmbase+TCO_CTRL1_REG);
+ spin_unlock(&amdtco_lock);
+}
+
+static int amdtco_fop_open(struct inode *inode, struct file *file)
+{
+ if (down_trylock(&open_sem))
+ return -EBUSY;
+
+ if (timeout > MAX_TIMEOUT)
+ timeout = MAX_TIMEOUT;
+
+ amdtco_settimeout(timeout);
+ amdtco_global_enable();
+ amdtco_ping();
+ printk(KERN_INFO PFX "Watchdog enabled, timeout = %d/%d seconds\n",
+ amdtco_gettimeout(), timeout);
+
+ return 0;
+}
+
+
+static int amdtco_fop_ioctl(struct inode *inode, struct file *file, unsigned int cmd, unsigned long arg)
+{
+ int new_timeout;
+ int tmp;
+
+ static struct watchdog_info ident = {
+ .options = WDIOF_SETTIMEOUT | WDIOF_CARDRESET,
+ .identity = "AMD 766/768"
+ };
+
+ switch (cmd) {
+ default:
+ return -ENOTTY;
+
+ case WDIOC_GETSUPPORT:
+ if (copy_to_user((struct watchdog_info *)arg, &ident, sizeof ident))
+ return -EFAULT;
+ return 0;
+
+ case WDIOC_GETSTATUS:
+ return put_user(amdtco_status(), (int *)arg);
+
+ case WDIOC_KEEPALIVE:
+ amdtco_ping();
+ return 0;
+
+ case WDIOC_SETTIMEOUT:
+ if (get_user(new_timeout, (int *)arg))
+ return -EFAULT;
+
+ if (new_timeout < 0)
+ return -EINVAL;
+
+ if (new_timeout > MAX_TIMEOUT)
+ new_timeout = MAX_TIMEOUT;
+
+ timeout = new_timeout;
+ amdtco_settimeout(timeout);
+ /* fall through and return the new timeout */
+
+ case WDIOC_GETTIMEOUT:
+ return put_user(amdtco_gettimeout(), (int *)arg);
+
+ case WDIOC_SETOPTIONS:
+ if (copy_from_user(&tmp, (int *)arg, sizeof tmp))
+ return -EFAULT;
+
+ if (tmp & WDIOS_DISABLECARD)
+ amdtco_disable();
+
+ if (tmp & WDIOS_ENABLECARD)
+ amdtco_enable();
+
+ return 0;
+ }
+}
+
+
+static int amdtco_fop_release(struct inode *inode, struct file *file)
+{
+ if (expect_close) {
+ amdtco_disable();
+ printk(KERN_INFO PFX "Watchdog disabled\n");
+ } else {
+ amdtco_ping();
+ printk(KERN_CRIT PFX "Unexpected close! Timeout in %d seconds\n", timeout);
+ }
+
+ up(&open_sem);
+ return 0;
+}
+
+
+static ssize_t amdtco_fop_write(struct file *file, const char *data, size_t len, loff_t *ppos)
+{
+ if (ppos != &file->f_pos)
+ return -ESPIPE;
+
+ if (len) {
+#ifndef CONFIG_WATCHDOG_NOWAYOUT
+ size_t i;
+ char c;
+ expect_close = 0;
+
+ for (i = 0; i != len; i++) {
+ if (get_user(c, data + i))
+ return -EFAULT;
+
+ if (c == 'V')
+ expect_close = 1;
+ }
+#endif
+ amdtco_ping();
+ return len;
+ }
+
+ return 0;
+}
+
+
+static int amdtco_notify_sys(struct notifier_block *this, unsigned long code, void *unused)
+{
+ if (code == SYS_DOWN || code == SYS_HALT)
+ amdtco_disable();
+
+ return NOTIFY_DONE;
+}
+
+
+static struct notifier_block amdtco_notifier =
+{
+ .notifier_call = amdtco_notify_sys
+};
+
+static struct file_operations amdtco_fops =
+{
+ .owner = THIS_MODULE,
+ .write = amdtco_fop_write,
+ .ioctl = amdtco_fop_ioctl,
+ .open = amdtco_fop_open,
+ .release = amdtco_fop_release
+};
+
+static struct miscdevice amdtco_miscdev =
+{
+ .minor = WATCHDOG_MINOR,
+ .name = "watchdog",
+ .fops = &amdtco_fops
+};
+
+static struct pci_device_id amdtco_pci_tbl[] __initdata = {
+ /* AMD 766 PCI_IDs here */
+ { PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_OPUS_7443, PCI_ANY_ID, PCI_ANY_ID, },
+ { 0, }
+};
+
+MODULE_DEVICE_TABLE (pci, amdtco_pci_tbl);
+
+static int __init amdtco_init(void)
+{
+ int ret;
+
+ sema_init(&open_sem, 1);
+ spin_lock_init(&amdtco_lock);
+
+ pci_for_each_dev(dev) {
+ if (pci_match_device (amdtco_pci_tbl, dev) != NULL)
+ goto found_one;
+ }
+
+ return -ENODEV;
+
+found_one:
+
+ if ((ret = register_reboot_notifier(&amdtco_notifier))) {
+ printk(KERN_ERR PFX "Unable to register reboot notifier err = %d\n", ret);
+ goto out_clean;
+ }
+
+ if ((ret = misc_register(&amdtco_miscdev))) {
+ printk(KERN_ERR PFX "Unable to register miscdev on minor %d\n", WATCHDOG_MINOR);
+ goto out_unreg_reboot;
+ }
+
+ pci_read_config_dword(dev, 0x58, &pmbase);
+ pmbase &= 0x0000FF00;
+
+ if (pmbase == 0) {
+ printk (KERN_ERR PFX "power management base not set\n");
+ ret = -EIO;
+ goto out_unreg_misc;
+ }
+
+ /* ret = 0; */
+ printk(banner);
+ goto out_clean;
+
+out_unreg_misc:
+ misc_deregister(&amdtco_miscdev);
+out_unreg_reboot:
+ unregister_reboot_notifier(&amdtco_notifier);
+out_clean:
+ return ret;
+}
+
+static void __exit amdtco_exit(void)
+{
+ misc_deregister(&amdtco_miscdev);
+ unregister_reboot_notifier(&amdtco_notifier);
+}
+
+
+#ifndef MODULE
+static int __init amdtco_setup(char *str)
+{
+ int ints[4];
+
+ str = get_options (str, ARRAY_SIZE(ints), ints);
+ if (ints[0] > 0)
+ timeout = ints[1];
+
+ return 1;
+}
+
+__setup("amd7xx_tco=", amdtco_setup);
+#endif
+
+module_init(amdtco_init);
+module_exit(amdtco_exit);
+
+MODULE_AUTHOR("Zwane Mwaikambo ");
+MODULE_DESCRIPTION("AMD 766/768 TCO Timer Driver");
+MODULE_LICENSE("GPL");
+EXPORT_NO_SYMBOLS;
+
diff -Nru a/drivers/char/watchdog/i810-tco.c b/drivers/char/watchdog/i810-tco.c
--- a/drivers/char/watchdog/i810-tco.c Tue Mar 4 19:30:04 2003
+++ b/drivers/char/watchdog/i810-tco.c Tue Mar 4 19:30:04 2003
@@ -161,7 +161,7 @@
}
/*
- * Reload (trigger) the timer. Lock is needed so we dont reload it during
+ * Reload (trigger) the timer. Lock is needed so we don't reload it during
* a reprogramming event
*/
@@ -218,7 +218,7 @@
tco_expect_close = 0;
- /* scan to see wether or not we got the magic character */
+ /* scan to see whether or not we got the magic character */
for (i = 0; i != len; i++) {
u8 c;
if(get_user(c, data+i))
diff -Nru a/drivers/char/watchdog/ib700wdt.c b/drivers/char/watchdog/ib700wdt.c
--- a/drivers/char/watchdog/ib700wdt.c Tue Mar 4 19:30:13 2003
+++ b/drivers/char/watchdog/ib700wdt.c Tue Mar 4 19:30:13 2003
@@ -50,6 +50,8 @@
static spinlock_t ibwdt_lock;
static int expect_close = 0;
+#define PFX "ib700wdt: "
+
/*
*
* Watchdog Timer Configuration
@@ -226,8 +228,6 @@
spin_unlock(&ibwdt_lock);
return -EBUSY;
}
- if (nowayout)
- MOD_INC_USE_COUNT;
/* Activate */
ibwdt_is_open = 1;
@@ -247,7 +247,7 @@
if (expect_close)
outb_p(wd_times[wd_margin], WDT_STOP);
else
- printk(KERN_CRIT "WDT device closed unexpectedly. WDT will not stop!\n");
+ printk(KERN_CRIT PFX "WDT device closed unexpectedly. WDT will not stop!\n");
ibwdt_is_open = 0;
spin_unlock(&ibwdt_lock);
@@ -300,29 +300,49 @@
.priority = 0
};
-static int __init
-ibwdt_init(void)
+static int __init ibwdt_init(void)
{
- printk("WDT driver for IB700 single board computer initialising.\n");
+ int res;
+
+ printk(KERN_INFO PFX "WDT driver for IB700 single board computer initialising.\n");
spin_lock_init(&ibwdt_lock);
- if (misc_register(&ibwdt_miscdev))
- return -ENODEV;
+ res = misc_register(&ibwdt_miscdev);
+ if (res) {
+ printk (KERN_ERR PFX "failed to register misc device\n");
+ goto out_nomisc;
+ }
+
#if WDT_START != WDT_STOP
if (!request_region(WDT_STOP, 1, "IB700 WDT")) {
- misc_deregister(&ibwdt_miscdev);
- return -EIO;
+ printk (KERN_ERR PFX "STOP method I/O %X is not available.\n", WDT_STOP);
+ res = -EIO;
+ goto out_nostopreg;
}
#endif
+
if (!request_region(WDT_START, 1, "IB700 WDT")) {
-#if WDT_START != WDT_STOP
- release_region(WDT_STOP, 1);
-#endif
- misc_deregister(&ibwdt_miscdev);
- return -EIO;
+ printk (KERN_ERR PFX "START method I/O %X is not available.\n", WDT_START);
+ res = -EIO;
+ goto out_nostartreg;
+ }
+ res = register_reboot_notifier(&ibwdt_notifier);
+ if (res) {
+ printk (KERN_ERR PFX "Failed to register reboot notifier.\n");
+ goto out_noreboot;
}
- register_reboot_notifier(&ibwdt_notifier);
return 0;
+
+out_noreboot:
+ release_region(WDT_START, 1);
+out_nostartreg:
+#if WDT_START != WDT_STOP
+ release_region(WDT_STOP, 1);
+#endif
+out_nostopreg:
+ misc_deregister(&ibwdt_miscdev);
+out_nomisc:
+ return res;
}
static void __exit
diff -Nru a/drivers/char/watchdog/indydog.c b/drivers/char/watchdog/indydog.c
--- a/drivers/char/watchdog/indydog.c Tue Mar 4 19:30:13 2003
+++ b/drivers/char/watchdog/indydog.c Tue Mar 4 19:30:13 2003
@@ -53,9 +53,6 @@
if( test_and_set_bit(0,&indydog_alive) )
return -EBUSY;
- if (nowayout)
- MOD_INC_USE_COUNT;
-
/*
* Activate timer
*/
diff -Nru a/drivers/char/watchdog/machzwd.c b/drivers/char/watchdog/machzwd.c
--- a/drivers/char/watchdog/machzwd.c Tue Mar 4 19:30:05 2003
+++ b/drivers/char/watchdog/machzwd.c Tue Mar 4 19:30:05 2003
@@ -390,9 +390,6 @@
return -EBUSY;
}
- if (nowayout)
- MOD_INC_USE_COUNT;
-
zf_is_open = 1;
spin_unlock(&zf_lock);
diff -Nru a/drivers/char/watchdog/mixcomwd.c b/drivers/char/watchdog/mixcomwd.c
--- a/drivers/char/watchdog/mixcomwd.c Tue Mar 4 19:30:05 2003
+++ b/drivers/char/watchdog/mixcomwd.c Tue Mar 4 19:30:05 2003
@@ -93,9 +93,7 @@
}
mixcomwd_ping();
- if (nowayout) {
- MOD_INC_USE_COUNT;
- } else {
+ if (!nowayout) {
if(mixcomwd_timer_alive) {
del_timer(&mixcomwd_timer);
mixcomwd_timer_alive=0;
diff -Nru a/drivers/char/watchdog/pcwd.c b/drivers/char/watchdog/pcwd.c
--- a/drivers/char/watchdog/pcwd.c Tue Mar 4 19:30:04 2003
+++ b/drivers/char/watchdog/pcwd.c Tue Mar 4 19:30:04 2003
@@ -430,7 +430,7 @@
atomic_inc( &open_allowed );
return -EBUSY;
}
- MOD_INC_USE_COUNT;
+
/* Enable the port */
if (revision == PCWD_REVISION_C) {
spin_lock(&io_lock);
diff -Nru a/drivers/char/watchdog/sbc60xxwdt.c b/drivers/char/watchdog/sbc60xxwdt.c
--- a/drivers/char/watchdog/sbc60xxwdt.c Tue Mar 4 19:30:11 2003
+++ b/drivers/char/watchdog/sbc60xxwdt.c Tue Mar 4 19:30:11 2003
@@ -50,7 +50,7 @@
*
* Why `V' ? Well, `V' is the character in ASCII for the value 86,
* and we all know that 86 is _the_ most random number in the universe.
- * Therefore it is the letter that has the slightest chance of occuring
+ * Therefore it is the letter that has the slightest chance of occurring
* by chance, when the system becomes corrupted.
*
*/
@@ -206,9 +206,7 @@
/* Just in case we're already talking to someone... */
if(wdt_is_open)
return -EBUSY;
- if (nowayout) {
- MOD_INC_USE_COUNT;
- }
+
/* Good, fire up the show */
wdt_is_open = 1;
wdt_startup();
diff -Nru a/drivers/char/watchdog/sc520_wdt.c b/drivers/char/watchdog/sc520_wdt.c
--- a/drivers/char/watchdog/sc520_wdt.c Tue Mar 4 19:30:08 2003
+++ b/drivers/char/watchdog/sc520_wdt.c Tue Mar 4 19:30:08 2003
@@ -229,8 +229,6 @@
return -EBUSY;
/* Good, fire up the show */
wdt_startup();
- if (nowayout)
- MOD_INC_USE_COUNT;
return 0;
default:
@@ -253,11 +251,6 @@
return 0;
}
-static long long fop_llseek(struct file *file, long long offset, int origin)
-{
- return -ESPIPE;
-}
-
static int fop_ioctl(struct inode *inode, struct file *file, unsigned int cmd,
unsigned long arg)
{
@@ -282,7 +275,7 @@
static struct file_operations wdt_fops = {
.owner = THIS_MODULE,
- .llseek = fop_llseek,
+ .llseek = no_llseek,
.write = fop_write,
.open = fop_open,
.release = fop_close,
diff -Nru a/drivers/char/watchdog/shwdt.c b/drivers/char/watchdog/shwdt.c
--- a/drivers/char/watchdog/shwdt.c Tue Mar 4 19:30:05 2003
+++ b/drivers/char/watchdog/shwdt.c Tue Mar 4 19:30:05 2003
@@ -1,9 +1,9 @@
/*
* drivers/char/shwdt.c
*
- * Watchdog driver for integrated watchdog in the SuperH 3/4 processors.
+ * Watchdog driver for integrated watchdog in the SuperH processors.
*
- * Copyright (C) 2001 Paul Mundt
+ * Copyright (C) 2001, 2002 Paul Mundt
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
@@ -26,14 +26,17 @@
#include
#include
-#if defined(CONFIG_CPU_SH4)
+#if defined(CONFIG_CPU_SH5)
+ #define WTCNT CPRC_BASE + 0x10
+ #define WTCSR CPRC_BASE + 0x18
+#elif defined(CONFIG_CPU_SH4)
#define WTCNT 0xffc00008
#define WTCSR 0xffc0000c
#elif defined(CONFIG_CPU_SH3)
#define WTCNT 0xffffff84
#define WTCSR 0xffffff86
#else
- #error "Can't use SH 3/4 watchdog on non-SH 3/4 processor."
+ #error "Can't use SuperH watchdog on this platform"
#endif
#define WTCNT_HIGH 0x5a00
@@ -186,10 +189,6 @@
if (test_and_set_bit(0, &sh_is_open))
return -EBUSY;
- if (nowayout) {
- MOD_INC_USE_COUNT;
- }
-
sh_wdt_start();
break;
@@ -405,8 +404,8 @@
misc_deregister(&sh_wdt_miscdev);
}
-MODULE_AUTHOR("Paul Mundt ");
-MODULE_DESCRIPTION("SH 3/4 watchdog driver");
+MODULE_AUTHOR("Paul Mundt ");
+MODULE_DESCRIPTION("SuperH watchdog driver");
MODULE_LICENSE("GPL");
MODULE_PARM(clock_division_ratio, "i");
MODULE_PARM_DESC(clock_division_ratio, "Clock division ratio. Valid ranges are from 0x5 (1.31ms) to 0x7 (5.25ms). Defaults to 0x7.");
diff -Nru a/drivers/char/watchdog/softdog.c b/drivers/char/watchdog/softdog.c
--- a/drivers/char/watchdog/softdog.c Tue Mar 4 19:30:08 2003
+++ b/drivers/char/watchdog/softdog.c Tue Mar 4 19:30:08 2003
@@ -103,9 +103,7 @@
{
if(test_and_set_bit(0, &timer_alive))
return -EBUSY;
- if (nowayout) {
- MOD_INC_USE_COUNT;
- }
+
/*
* Activate timer
*/
diff -Nru a/drivers/char/watchdog/wdt.c b/drivers/char/watchdog/wdt.c
--- a/drivers/char/watchdog/wdt.c Tue Mar 4 19:30:09 2003
+++ b/drivers/char/watchdog/wdt.c Tue Mar 4 19:30:09 2003
@@ -175,7 +175,7 @@
*
* Handle an interrupt from the board. These are raised when the status
* map changes in what the board considers an interesting way. That means
- * a failure condition occuring.
+ * a failure condition occurring.
*/
void wdt_interrupt(int irq, void *dev_id, struct pt_regs *regs)
diff -Nru a/drivers/char/watchdog/wdt977.c b/drivers/char/watchdog/wdt977.c
--- a/drivers/char/watchdog/wdt977.c Tue Mar 4 19:30:12 2003
+++ b/drivers/char/watchdog/wdt977.c Tue Mar 4 19:30:12 2003
@@ -99,8 +99,6 @@
if (nowayout)
{
- MOD_INC_USE_COUNT;
-
/* do not permit disabling the watchdog by writing 0 to reg. 0xF2 */
if (!timeoutM) timeoutM = DEFAULT_TIMEOUT;
}
diff -Nru a/drivers/char/watchdog/wdt_pci.c b/drivers/char/watchdog/wdt_pci.c
--- a/drivers/char/watchdog/wdt_pci.c Tue Mar 4 19:30:10 2003
+++ b/drivers/char/watchdog/wdt_pci.c Tue Mar 4 19:30:10 2003
@@ -158,7 +158,7 @@
*
* Handle an interrupt from the board. These are raised when the status
* map changes in what the board considers an interesting way. That means
- * a failure condition occuring.
+ * a failure condition occurring.
*/
static void wdtpci_interrupt(int irq, void *dev_id, struct pt_regs *regs)
@@ -365,9 +365,6 @@
if (down_trylock(&open_sem))
return -EBUSY;
- if (nowayout) {
- MOD_INC_USE_COUNT;
- }
/*
* Activate
*/
diff -Nru a/drivers/cpufreq/Kconfig b/drivers/cpufreq/Kconfig
--- a/drivers/cpufreq/Kconfig Tue Mar 4 19:30:13 2003
+++ b/drivers/cpufreq/Kconfig Tue Mar 4 19:30:13 2003
@@ -9,3 +9,30 @@
For details, take a look at linux/Documentation/cpufreq.
If in doubt, say N.
+
+config CPU_FREQ_GOV_USERSPACE
+ tristate "'userspace' governor for userspace frequency scaling"
+ depends on CPU_FREQ
+ help
+ Enable this cpufreq governor when you either want to set the
+ CPU frequency manually or when a userspace program shall
+ be able to set the CPU frequency dynamically, like on LART
+ ( http://www.lart.tudelft.nl/ )
+
+ For details, take a look at linux/Documentation/cpufreq.
+
+ If in doubt, say Y.
+
+config CPU_FREQ_24_API
+ bool "/proc/sys/cpu/ interface (2.4. / OLD)"
+ depends on CPU_FREQ && SYSCTL && CPU_FREQ_GOV_USERSPACE
+ help
+ This enables the /proc/sys/cpu/ sysctl interface for controlling
+ the CPUFreq "userspace" governor. This is the same interface
+ as known from the 2.4-kernel patches for CPUFreq, and offers
+ the same functionality as long as "userspace" is the
+ selected governor for the specified CPU.
+
+ For details, take a look at linux/Documentation/cpufreq.
+
+ If in doubt, say N.
diff -Nru a/drivers/cpufreq/Makefile b/drivers/cpufreq/Makefile
--- a/drivers/cpufreq/Makefile Tue Mar 4 19:30:14 2003
+++ b/drivers/cpufreq/Makefile Tue Mar 4 19:30:14 2003
@@ -1,3 +1,4 @@
#CPUfreq governors and cross-arch helpers
obj-$(CONFIG_CPU_FREQ_TABLE) += freq_table.o
obj-$(CONFIG_CPU_FREQ_PROC_INTF) += proc_intf.o
+obj-$(CONFIG_CPU_FREQ_GOV_USERSPACE) += userspace.o
diff -Nru a/drivers/cpufreq/userspace.c b/drivers/cpufreq/userspace.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/drivers/cpufreq/userspace.c Tue Mar 4 19:30:14 2003
@@ -0,0 +1,596 @@
+/*
+ * drivers/cpufreq/userspace.c
+ *
+ * Copyright (C) 2001 Russell King
+ * (C) 2002 - 2003 Dominik Brodowski
+ *
+ * $Id:$
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+
+#define CTL_CPU_VARS_SPEED_MAX(cpunr) { \
+ .ctl_name = CPU_NR_FREQ_MAX, \
+ .data = &cpu_max_freq[cpunr], \
+ .procname = "speed-max", \
+ .maxlen = sizeof(cpu_max_freq[cpunr]),\
+ .mode = 0444, \
+ .proc_handler = proc_dointvec, }
+
+#define CTL_CPU_VARS_SPEED_MIN(cpunr) { \
+ .ctl_name = CPU_NR_FREQ_MIN, \
+ .data = &cpu_min_freq[cpunr], \
+ .procname = "speed-min", \
+ .maxlen = sizeof(cpu_min_freq[cpunr]),\
+ .mode = 0444, \
+ .proc_handler = proc_dointvec, }
+
+#define CTL_CPU_VARS_SPEED(cpunr) { \
+ .ctl_name = CPU_NR_FREQ, \
+ .procname = "speed", \
+ .mode = 0644, \
+ .proc_handler = cpufreq_procctl, \
+ .strategy = cpufreq_sysctl, \
+ .extra1 = (void*) (cpunr), }
+
+#define CTL_TABLE_CPU_VARS(cpunr) static ctl_table ctl_cpu_vars_##cpunr[] = {\
+ CTL_CPU_VARS_SPEED_MAX(cpunr), \
+ CTL_CPU_VARS_SPEED_MIN(cpunr), \
+ CTL_CPU_VARS_SPEED(cpunr), \
+ { .ctl_name = 0, }, }
+
+/* the ctl_table entry for each CPU */
+#define CPU_ENUM(s) { \
+ .ctl_name = (CPU_NR + s), \
+ .procname = #s, \
+ .mode = 0555, \
+ .child = ctl_cpu_vars_##s }
+
+/*
+ * A few values needed by the userspace governor
+ */
+static unsigned int cpu_max_freq[NR_CPUS];
+static unsigned int cpu_min_freq[NR_CPUS];
+static unsigned int cpu_cur_freq[NR_CPUS];
+static unsigned int cpu_is_managed[NR_CPUS];
+static struct cpufreq_policy current_policy[NR_CPUS];
+
+static DECLARE_MUTEX (userspace_sem);
+
+
+/* keep track of frequency transitions */
+static int
+userspace_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
+ void *data)
+{
+ struct cpufreq_freqs *freq = data;
+
+ cpu_cur_freq[freq->cpu] = freq->new;
+
+ return 0;
+}
+
+static struct notifier_block userspace_cpufreq_notifier_block = {
+ .notifier_call = userspace_cpufreq_notifier
+};
+
+
+/**
+ * cpufreq_set - set the CPU frequency
+ * @freq: target frequency in kHz
+ * @cpu: CPU for which the frequency is to be set
+ *
+ * Sets the CPU frequency to freq.
+ */
+int cpufreq_set(unsigned int freq, unsigned int cpu)
+{
+ int ret = -EINVAL;
+
+ down(&userspace_sem);
+ if (!cpu_is_managed[cpu])
+ goto err;
+
+ if (freq < cpu_min_freq[cpu])
+ freq = cpu_min_freq[cpu];
+ if (freq > cpu_max_freq[cpu])
+ freq = cpu_max_freq[cpu];
+
+ ret = cpufreq_driver_target_l(¤t_policy[cpu], freq,
+ CPUFREQ_RELATION_L);
+
+ err:
+ up(&userspace_sem);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(cpufreq_set);
+
+
+/**
+ * cpufreq_setmax - set the CPU to the maximum frequency
+ * @cpu: affected CPU
+ *
+ * Sets the CPU frequency to the maximum frequency supported by
+ * this CPU.
+ */
+int cpufreq_setmax(unsigned int cpu)
+{
+ if (!cpu_is_managed[cpu] || !cpu_online(cpu))
+ return -EINVAL;
+ return cpufreq_set(cpu_max_freq[cpu], cpu);
+}
+EXPORT_SYMBOL_GPL(cpufreq_setmax);
+
+
+/**
+ * cpufreq_get - get the current CPU frequency (in kHz)
+ * @cpu: CPU number
+ *
+ * Get the current (static) frequency of the given CPU
+ */
+unsigned int cpufreq_get(unsigned int cpu)
+{
+ return cpu_cur_freq[cpu];
+}
+EXPORT_SYMBOL(cpufreq_get);
+
+
+#ifdef CONFIG_CPU_FREQ_24_API
+
+
+/*********************** cpufreq_sysctl interface ********************/
+static int
+cpufreq_procctl(ctl_table *ctl, int write, struct file *filp,
+ void *buffer, size_t *lenp)
+{
+ char buf[16], *p;
+ int cpu = (int) ctl->extra1;
+ int len, left = *lenp;
+
+ if (!left || (filp->f_pos && !write) || !cpu_online(cpu)) {
+ *lenp = 0;
+ return 0;
+ }
+
+ if (write) {
+ unsigned int freq;
+
+ len = left;
+ if (left > sizeof(buf))
+ left = sizeof(buf);
+ if (copy_from_user(buf, buffer, left))
+ return -EFAULT;
+ buf[sizeof(buf) - 1] = '\0';
+
+ freq = simple_strtoul(buf, &p, 0);
+ cpufreq_set(freq, cpu);
+ } else {
+ len = sprintf(buf, "%d\n", cpufreq_get(cpu));
+ if (len > left)
+ len = left;
+ if (copy_to_user(buffer, buf, len))
+ return -EFAULT;
+ }
+
+ *lenp = len;
+ filp->f_pos += len;
+ return 0;
+}
+
+static int
+cpufreq_sysctl(ctl_table *table, int *name, int nlen,
+ void *oldval, size_t *oldlenp,
+ void *newval, size_t newlen, void **context)
+{
+ int cpu = (int) table->extra1;
+
+ if (!cpu_online(cpu))
+ return -EINVAL;
+
+ if (oldval && oldlenp) {
+ size_t oldlen;
+
+ if (get_user(oldlen, oldlenp))
+ return -EFAULT;
+
+ if (oldlen != sizeof(unsigned int))
+ return -EINVAL;
+
+ if (put_user(cpufreq_get(cpu), (unsigned int *)oldval) ||
+ put_user(sizeof(unsigned int), oldlenp))
+ return -EFAULT;
+ }
+ if (newval && newlen) {
+ unsigned int freq;
+
+ if (newlen != sizeof(unsigned int))
+ return -EINVAL;
+
+ if (get_user(freq, (unsigned int *)newval))
+ return -EFAULT;
+
+ cpufreq_set(freq, cpu);
+ }
+ return 1;
+}
+
+/* ctl_table ctl_cpu_vars_{0,1,...,(NR_CPUS-1)} */
+/* due to NR_CPUS tweaking, a lot of if/endifs are required, sorry */
+ CTL_TABLE_CPU_VARS(0);
+#if NR_CPUS > 1
+ CTL_TABLE_CPU_VARS(1);
+#endif
+#if NR_CPUS > 2
+ CTL_TABLE_CPU_VARS(2);
+#endif
+#if NR_CPUS > 3
+ CTL_TABLE_CPU_VARS(3);
+#endif
+#if NR_CPUS > 4
+ CTL_TABLE_CPU_VARS(4);
+#endif
+#if NR_CPUS > 5
+ CTL_TABLE_CPU_VARS(5);
+#endif
+#if NR_CPUS > 6
+ CTL_TABLE_CPU_VARS(6);
+#endif
+#if NR_CPUS > 7
+ CTL_TABLE_CPU_VARS(7);
+#endif
+#if NR_CPUS > 8
+ CTL_TABLE_CPU_VARS(8);
+#endif
+#if NR_CPUS > 9
+ CTL_TABLE_CPU_VARS(9);
+#endif
+#if NR_CPUS > 10
+ CTL_TABLE_CPU_VARS(10);
+#endif
+#if NR_CPUS > 11
+ CTL_TABLE_CPU_VARS(11);
+#endif
+#if NR_CPUS > 12
+ CTL_TABLE_CPU_VARS(12);
+#endif
+#if NR_CPUS > 13
+ CTL_TABLE_CPU_VARS(13);
+#endif
+#if NR_CPUS > 14
+ CTL_TABLE_CPU_VARS(14);
+#endif
+#if NR_CPUS > 15
+ CTL_TABLE_CPU_VARS(15);
+#endif
+#if NR_CPUS > 16
+ CTL_TABLE_CPU_VARS(16);
+#endif
+#if NR_CPUS > 17
+ CTL_TABLE_CPU_VARS(17);
+#endif
+#if NR_CPUS > 18
+ CTL_TABLE_CPU_VARS(18);
+#endif
+#if NR_CPUS > 19
+ CTL_TABLE_CPU_VARS(19);
+#endif
+#if NR_CPUS > 20
+ CTL_TABLE_CPU_VARS(20);
+#endif
+#if NR_CPUS > 21
+ CTL_TABLE_CPU_VARS(21);
+#endif
+#if NR_CPUS > 22
+ CTL_TABLE_CPU_VARS(22);
+#endif
+#if NR_CPUS > 23
+ CTL_TABLE_CPU_VARS(23);
+#endif
+#if NR_CPUS > 24
+ CTL_TABLE_CPU_VARS(24);
+#endif
+#if NR_CPUS > 25
+ CTL_TABLE_CPU_VARS(25);
+#endif
+#if NR_CPUS > 26
+ CTL_TABLE_CPU_VARS(26);
+#endif
+#if NR_CPUS > 27
+ CTL_TABLE_CPU_VARS(27);
+#endif
+#if NR_CPUS > 28
+ CTL_TABLE_CPU_VARS(28);
+#endif
+#if NR_CPUS > 29
+ CTL_TABLE_CPU_VARS(29);
+#endif
+#if NR_CPUS > 30
+ CTL_TABLE_CPU_VARS(30);
+#endif
+#if NR_CPUS > 31
+ CTL_TABLE_CPU_VARS(31);
+#endif
+#if NR_CPUS > 32
+#error please extend CPU enumeration
+#endif
+
+/* due to NR_CPUS tweaking, a lot of if/endifs are required, sorry */
+static ctl_table ctl_cpu_table[NR_CPUS + 1] = {
+ CPU_ENUM(0),
+#if NR_CPUS > 1
+ CPU_ENUM(1),
+#endif
+#if NR_CPUS > 2
+ CPU_ENUM(2),
+#endif
+#if NR_CPUS > 3
+ CPU_ENUM(3),
+#endif
+#if NR_CPUS > 4
+ CPU_ENUM(4),
+#endif
+#if NR_CPUS > 5
+ CPU_ENUM(5),
+#endif
+#if NR_CPUS > 6
+ CPU_ENUM(6),
+#endif
+#if NR_CPUS > 7
+ CPU_ENUM(7),
+#endif
+#if NR_CPUS > 8
+ CPU_ENUM(8),
+#endif
+#if NR_CPUS > 9
+ CPU_ENUM(9),
+#endif
+#if NR_CPUS > 10
+ CPU_ENUM(10),
+#endif
+#if NR_CPUS > 11
+ CPU_ENUM(11),
+#endif
+#if NR_CPUS > 12
+ CPU_ENUM(12),
+#endif
+#if NR_CPUS > 13
+ CPU_ENUM(13),
+#endif
+#if NR_CPUS > 14
+ CPU_ENUM(14),
+#endif
+#if NR_CPUS > 15
+ CPU_ENUM(15),
+#endif
+#if NR_CPUS > 16
+ CPU_ENUM(16),
+#endif
+#if NR_CPUS > 17
+ CPU_ENUM(17),
+#endif
+#if NR_CPUS > 18
+ CPU_ENUM(18),
+#endif
+#if NR_CPUS > 19
+ CPU_ENUM(19),
+#endif
+#if NR_CPUS > 20
+ CPU_ENUM(20),
+#endif
+#if NR_CPUS > 21
+ CPU_ENUM(21),
+#endif
+#if NR_CPUS > 22
+ CPU_ENUM(22),
+#endif
+#if NR_CPUS > 23
+ CPU_ENUM(23),
+#endif
+#if NR_CPUS > 24
+ CPU_ENUM(24),
+#endif
+#if NR_CPUS > 25
+ CPU_ENUM(25),
+#endif
+#if NR_CPUS > 26
+ CPU_ENUM(26),
+#endif
+#if NR_CPUS > 27
+ CPU_ENUM(27),
+#endif
+#if NR_CPUS > 28
+ CPU_ENUM(28),
+#endif
+#if NR_CPUS > 29
+ CPU_ENUM(29),
+#endif
+#if NR_CPUS > 30
+ CPU_ENUM(30),
+#endif
+#if NR_CPUS > 31
+ CPU_ENUM(31),
+#endif
+#if NR_CPUS > 32
+#error please extend CPU enumeration
+#endif
+ {
+ .ctl_name = 0,
+ }
+};
+
+static ctl_table ctl_cpu[2] = {
+ {
+ .ctl_name = CTL_CPU,
+ .procname = "cpu",
+ .mode = 0555,
+ .child = ctl_cpu_table,
+ },
+ {
+ .ctl_name = 0,
+ }
+};
+
+struct ctl_table_header *cpufreq_sysctl_table;
+
+static inline void cpufreq_sysctl_init(void)
+{
+ cpufreq_sysctl_table = register_sysctl_table(ctl_cpu, 0);
+}
+
+static inline void cpufreq_sysctl_exit(void)
+{
+ unregister_sysctl_table(cpufreq_sysctl_table);
+}
+
+#else
+#define cpufreq_sysctl_init() do {} while(0)
+#define cpufreq_sysctl_exit() do {} while(0)
+#endif /* CONFIG_CPU_FREQ_24_API */
+
+
+/************************** sysfs interface ************************/
+static inline int to_cpu_nr (struct device *dev)
+{
+ struct sys_device * cpu_sys_dev = container_of(dev, struct sys_device, dev);
+ return (cpu_sys_dev->id);
+}
+
+static ssize_t show_speed (struct device *dev, char *buf)
+{
+ unsigned int cpu = to_cpu_nr(dev);
+
+ return sprintf (buf, "%u\n", cpu_cur_freq[cpu]);
+}
+
+static ssize_t
+store_speed (struct device *dev, const char *buf, size_t count)
+{
+ unsigned int cpu = to_cpu_nr(dev);
+ unsigned int freq = 0;
+ unsigned int ret;
+
+ ret = sscanf (buf, "%u", &freq);
+ if (ret != 1)
+ return -EINVAL;
+
+ cpufreq_set(freq, cpu);
+
+ return count;
+}
+
+static DEVICE_ATTR(scaling_setspeed, (S_IRUGO | S_IWUSR), show_speed, store_speed);
+
+
+static int cpufreq_governor_userspace(struct cpufreq_policy *policy,
+ unsigned int event)
+{
+ unsigned int cpu = policy->cpu;
+ switch (event) {
+ case CPUFREQ_GOV_START:
+ if ((!cpu_online(cpu)) || (!try_module_get(THIS_MODULE)) ||
+ !policy->cur)
+ return -EINVAL;
+ down(&userspace_sem);
+ cpu_is_managed[cpu] = 1;
+ cpu_min_freq[cpu] = policy->min;
+ cpu_max_freq[cpu] = policy->max;
+ cpu_cur_freq[cpu] = policy->cur;
+ device_create_file (policy->intf.dev, &dev_attr_scaling_setspeed);
+ memcpy (¤t_policy[cpu], policy, sizeof(struct cpufreq_policy));
+ up(&userspace_sem);
+ break;
+ case CPUFREQ_GOV_STOP:
+ down(&userspace_sem);
+ cpu_is_managed[cpu] = 0;
+ cpu_min_freq[cpu] = 0;
+ cpu_max_freq[cpu] = 0;
+ device_remove_file (policy->intf.dev, &dev_attr_scaling_setspeed);
+ up(&userspace_sem);
+ module_put(THIS_MODULE);
+ break;
+ case CPUFREQ_GOV_LIMITS:
+ down(&userspace_sem);
+ cpu_min_freq[cpu] = policy->min;
+ cpu_max_freq[cpu] = policy->max;
+ if (policy->max < cpu_cur_freq[cpu])
+ cpufreq_driver_target(¤t_policy[cpu], policy->max,
+ CPUFREQ_RELATION_H);
+ else if (policy->min > cpu_cur_freq[cpu])
+ cpufreq_driver_target(¤t_policy[cpu], policy->min,
+ CPUFREQ_RELATION_L);
+ memcpy (¤t_policy[cpu], policy, sizeof(struct cpufreq_policy));
+ up(&userspace_sem);
+ break;
+ }
+ return 0;
+}
+
+/* on ARM SA1100 we need to rely on the values of cpufreq_get() - because
+ * of this, cpu_cur_freq[] needs to be set early.
+ */
+#if defined(CONFIG_ARM) && defined(CONFIG_ARCH_SA1100)
+extern unsigned int sa11x0_getspeed(void);
+
+static void cpufreq_sa11x0_compat(void)
+{
+ cpu_cur_freq[0] = sa11x0_getspeed();
+}
+#else
+#define cpufreq_sa11x0_compat() do {} while(0)
+#endif
+
+
+static struct cpufreq_governor cpufreq_gov_userspace = {
+ .name = "userspace",
+ .governor = cpufreq_governor_userspace,
+ .owner = THIS_MODULE,
+};
+EXPORT_SYMBOL(cpufreq_gov_userspace);
+
+static int already_init = 0;
+
+int cpufreq_gov_userspace_init(void)
+{
+ if (!already_init) {
+ down(&userspace_sem);
+ cpufreq_sa11x0_compat();
+ cpufreq_sysctl_init();
+ cpufreq_register_notifier(&userspace_cpufreq_notifier_block, CPUFREQ_TRANSITION_NOTIFIER);
+ already_init = 1;
+ up(&userspace_sem);
+ }
+ return cpufreq_register_governor(&cpufreq_gov_userspace);
+}
+EXPORT_SYMBOL(cpufreq_gov_userspace_init);
+
+
+static void __exit cpufreq_gov_userspace_exit(void)
+{
+ cpufreq_unregister_governor(&cpufreq_gov_userspace);
+ cpufreq_unregister_notifier(&userspace_cpufreq_notifier_block, CPUFREQ_TRANSITION_NOTIFIER);
+ cpufreq_sysctl_exit();
+}
+
+
+MODULE_AUTHOR ("Dominik Brodowski, Russell King");
+MODULE_DESCRIPTION ("CPUfreq policy governor 'userspace'");
+MODULE_LICENSE ("GPL");
+
+module_init(cpufreq_gov_userspace_init);
+module_exit(cpufreq_gov_userspace_exit);
diff -Nru a/drivers/hotplug/Makefile b/drivers/hotplug/Makefile
--- a/drivers/hotplug/Makefile Tue Mar 4 19:30:11 2003
+++ b/drivers/hotplug/Makefile Tue Mar 4 19:30:11 2003
@@ -18,7 +18,7 @@
cpqphp-objs := cpqphp_core.o \
cpqphp_ctrl.o \
- cpqphp_proc.o \
+ cpqphp_sysfs.o \
cpqphp_pci.o
ibmphp-objs := ibmphp_core.o \
diff -Nru a/drivers/hotplug/acpiphp_glue.c b/drivers/hotplug/acpiphp_glue.c
--- a/drivers/hotplug/acpiphp_glue.c Tue Mar 4 19:30:05 2003
+++ b/drivers/hotplug/acpiphp_glue.c Tue Mar 4 19:30:05 2003
@@ -229,136 +229,55 @@
/* decode ACPI _CRS data and convert into our internal resource list
* TBD: _TRA, etc.
*/
-static void
-decode_acpi_resource (struct acpi_resource *resource, struct acpiphp_bridge *bridge)
+static acpi_status
+decode_acpi_resource (struct acpi_resource *resource, void *context)
{
- struct acpi_resource_address16 *address16_data;
- struct acpi_resource_address32 *address32_data;
- struct acpi_resource_address64 *address64_data;
+ struct acpiphp_bridge *bridge = (struct acpiphp_bridge *) context;
+ struct acpi_resource_address64 address;
struct pci_resource *res;
- u32 resource_type, producer_consumer, address_length;
- u64 min_address_range, max_address_range;
- u16 cache_attribute = 0;
-
- int done = 0, found;
-
- /* shut up gcc */
- resource_type = producer_consumer = address_length = 0;
- min_address_range = max_address_range = 0;
-
- while (!done) {
- found = 0;
-
- switch (resource->id) {
- case ACPI_RSTYPE_ADDRESS16:
- address16_data = (struct acpi_resource_address16 *)&resource->data;
- resource_type = address16_data->resource_type;
- producer_consumer = address16_data->producer_consumer;
- min_address_range = address16_data->min_address_range;
- max_address_range = address16_data->max_address_range;
- address_length = address16_data->address_length;
- if (resource_type == ACPI_MEMORY_RANGE)
- cache_attribute = address16_data->attribute.memory.cache_attribute;
- found = 1;
- break;
+ if (resource->id != ACPI_RSTYPE_ADDRESS16 &&
+ resource->id != ACPI_RSTYPE_ADDRESS32 &&
+ resource->id != ACPI_RSTYPE_ADDRESS64)
+ return AE_OK;
- case ACPI_RSTYPE_ADDRESS32:
- address32_data = (struct acpi_resource_address32 *)&resource->data;
- resource_type = address32_data->resource_type;
- producer_consumer = address32_data->producer_consumer;
- min_address_range = address32_data->min_address_range;
- max_address_range = address32_data->max_address_range;
- address_length = address32_data->address_length;
- if (resource_type == ACPI_MEMORY_RANGE)
- cache_attribute = address32_data->attribute.memory.cache_attribute;
- found = 1;
- break;
+ acpi_resource_to_address64(resource, &address);
- case ACPI_RSTYPE_ADDRESS64:
- address64_data = (struct acpi_resource_address64 *)&resource->data;
- resource_type = address64_data->resource_type;
- producer_consumer = address64_data->producer_consumer;
- min_address_range = address64_data->min_address_range;
- max_address_range = address64_data->max_address_range;
- address_length = address64_data->address_length;
- if (resource_type == ACPI_MEMORY_RANGE)
- cache_attribute = address64_data->attribute.memory.cache_attribute;
- found = 1;
- break;
+ if (address.producer_consumer == ACPI_PRODUCER && address.address_length > 0) {
+ dbg("resource type: %d: 0x%llx - 0x%llx\n", address.resource_type, address.min_address_range, address.max_address_range);
+ res = acpiphp_make_resource(address.min_address_range,
+ address.address_length);
+ if (!res) {
+ err("out of memory\n");
+ return AE_OK;
+ }
- case ACPI_RSTYPE_END_TAG:
- done = 1;
+ switch (address.resource_type) {
+ case ACPI_MEMORY_RANGE:
+ if (address.attribute.memory.cache_attribute == ACPI_PREFETCHABLE_MEMORY) {
+ res->next = bridge->p_mem_head;
+ bridge->p_mem_head = res;
+ } else {
+ res->next = bridge->mem_head;
+ bridge->mem_head = res;
+ }
+ break;
+ case ACPI_IO_RANGE:
+ res->next = bridge->io_head;
+ bridge->io_head = res;
+ break;
+ case ACPI_BUS_NUMBER_RANGE:
+ res->next = bridge->bus_head;
+ bridge->bus_head = res;
break;
-
default:
- /* ignore */
+ /* invalid type */
+ kfree(res);
break;
}
-
- resource = (struct acpi_resource *)((char*)resource + resource->length);
-
- if (found && producer_consumer == ACPI_PRODUCER && address_length > 0) {
- switch (resource_type) {
- case ACPI_MEMORY_RANGE:
- if (cache_attribute == ACPI_PREFETCHABLE_MEMORY) {
- dbg("resource type: prefetchable memory 0x%x - 0x%x\n", (u32)min_address_range, (u32)max_address_range);
- res = acpiphp_make_resource(min_address_range,
- address_length);
- if (!res) {
- err("out of memory\n");
- return;
- }
- res->next = bridge->p_mem_head;
- bridge->p_mem_head = res;
- } else {
- dbg("resource type: memory 0x%x - 0x%x\n", (u32)min_address_range, (u32)max_address_range);
- res = acpiphp_make_resource(min_address_range,
- address_length);
- if (!res) {
- err("out of memory\n");
- return;
- }
- res->next = bridge->mem_head;
- bridge->mem_head = res;
- }
- break;
- case ACPI_IO_RANGE:
- dbg("resource type: io 0x%x - 0x%x\n", (u32)min_address_range, (u32)max_address_range);
- res = acpiphp_make_resource(min_address_range,
- address_length);
- if (!res) {
- err("out of memory\n");
- return;
- }
- res->next = bridge->io_head;
- bridge->io_head = res;
- break;
- case ACPI_BUS_NUMBER_RANGE:
- dbg("resource type: bus number %d - %d\n", (u32)min_address_range, (u32)max_address_range);
- res = acpiphp_make_resource(min_address_range,
- address_length);
- if (!res) {
- err("out of memory\n");
- return;
- }
- res->next = bridge->bus_head;
- bridge->bus_head = res;
- break;
- default:
- /* invalid type */
- break;
- }
- }
}
- acpiphp_resource_sort_and_combine(&bridge->io_head);
- acpiphp_resource_sort_and_combine(&bridge->mem_head);
- acpiphp_resource_sort_and_combine(&bridge->p_mem_head);
- acpiphp_resource_sort_and_combine(&bridge->bus_head);
-
- dbg("ACPI _CRS resource:\n");
- acpiphp_dump_resource(bridge);
+ return AE_OK;
}
@@ -476,9 +395,6 @@
static void add_host_bridge (acpi_handle *handle, int seg, int bus)
{
acpi_status status;
- struct acpi_buffer buffer = { .length = ACPI_ALLOCATE_BUFFER,
- .pointer = NULL};
-
struct acpiphp_bridge *bridge;
bridge = kmalloc(sizeof(struct acpiphp_bridge), GFP_KERNEL);
@@ -501,7 +417,8 @@
/* decode resources */
- status = acpi_get_current_resources(handle, &buffer);
+ status = acpi_walk_resources(handle, METHOD_NAME__CRS,
+ decode_acpi_resource, bridge);
if (ACPI_FAILURE(status)) {
err("failed to decode bridge resources\n");
@@ -509,8 +426,13 @@
return;
}
- decode_acpi_resource(buffer.pointer, bridge);
- kfree(buffer.pointer);
+ acpiphp_resource_sort_and_combine(&bridge->io_head);
+ acpiphp_resource_sort_and_combine(&bridge->mem_head);
+ acpiphp_resource_sort_and_combine(&bridge->p_mem_head);
+ acpiphp_resource_sort_and_combine(&bridge->bus_head);
+
+ dbg("ACPI _CRS resource:\n");
+ acpiphp_dump_resource(bridge);
if (bridge->bus_head) {
bridge->bus = bridge->bus_head->base;
@@ -1357,7 +1279,7 @@
if (sta != ACPI_STA_ALL) {
retval = acpiphp_disable_slot(slot);
if (retval) {
- err("Error occured in enabling\n");
+ err("Error occurred in disabling\n");
up(&slot->crit_sect);
goto err_exit;
}
@@ -1368,7 +1290,7 @@
if (sta == ACPI_STA_ALL) {
retval = acpiphp_enable_slot(slot);
if (retval) {
- err("Error occured in enabling\n");
+ err("Error occurred in enabling\n");
up(&slot->crit_sect);
goto err_exit;
}
diff -Nru a/drivers/hotplug/acpiphp_pci.c b/drivers/hotplug/acpiphp_pci.c
--- a/drivers/hotplug/acpiphp_pci.c Tue Mar 4 19:30:03 2003
+++ b/drivers/hotplug/acpiphp_pci.c Tue Mar 4 19:30:03 2003
@@ -194,133 +194,6 @@
return 0;
}
-
-/* enable pci_dev */
-static int configure_pci_dev (struct pci_dev_wrapped *wrapped_dev, struct pci_bus_wrapped *wrapped_bus)
-{
- struct acpiphp_func *func;
- struct acpiphp_bridge *bridge;
- struct pci_dev *dev;
-
- func = (struct acpiphp_func *)wrapped_dev->data;
- bridge = (struct acpiphp_bridge *)wrapped_bus->data;
- dev = wrapped_dev->dev;
-
- /* TBD: support PCI-to-PCI bridge case */
- if (!func || !bridge)
- return 0;
-
- //pci_proc_attach_device(dev);
- //pci_announce_device_to_drivers(dev);
- info("Device %s configured\n", dev->slot_name);
-
- return 0;
-}
-
-
-static int is_pci_dev_in_use (struct pci_dev* dev)
-{
- /*
- * dev->driver will be set if the device is in use by a new-style
- * driver -- otherwise, check the device's regions to see if any
- * driver has claimed them
- */
-
- int i, inuse=0;
-
- if (dev->driver) return 1; //assume driver feels responsible
-
- for (i = 0; !dev->driver && !inuse && (i < 6); i++) {
- if (!pci_resource_start(dev, i))
- continue;
-
- if (pci_resource_flags(dev, i) & IORESOURCE_IO)
- inuse = check_region(pci_resource_start(dev, i),
- pci_resource_len(dev, i));
- else if (pci_resource_flags(dev, i) & IORESOURCE_MEM)
- inuse = check_mem_region(pci_resource_start(dev, i),
- pci_resource_len(dev, i));
- }
-
- return inuse;
-}
-
-
-static int pci_hp_remove_device (struct pci_dev *dev)
-{
- if (is_pci_dev_in_use(dev)) {
- err("***Cannot safely power down device -- "
- "it appears to be in use***\n");
- return -EBUSY;
- }
- pci_remove_device(dev);
- return 0;
-}
-
-
-/* remove device driver */
-static int unconfigure_pci_dev_driver (struct pci_dev_wrapped *wrapped_dev, struct pci_bus_wrapped *wrapped_bus)
-{
- struct pci_dev *dev = wrapped_dev->dev;
-
- dbg("attempting removal of driver for device %s\n", dev->slot_name);
-
- /* Now, remove the Linux Driver Representation */
- if (dev->driver) {
- if (dev->driver->remove) {
- dev->driver->remove(dev);
- dbg("driver was properly removed\n");
- }
- dev->driver = NULL;
- }
-
- return is_pci_dev_in_use(dev);
-}
-
-
-/* remove pci_dev itself from system */
-static int unconfigure_pci_dev (struct pci_dev_wrapped *wrapped_dev, struct pci_bus_wrapped *wrapped_bus)
-{
- struct pci_dev *dev = wrapped_dev->dev;
-
- /* Now, remove the Linux Representation */
- if (dev) {
- if (pci_hp_remove_device(dev) == 0) {
- info("Device %s removed\n", dev->slot_name);
- kfree(dev); /* Now, remove */
- } else {
- return -1; /* problems while freeing, abort visitation */
- }
- }
-
- return 0;
-}
-
-
-/* remove pci_bus itself from system */
-static int unconfigure_pci_bus (struct pci_bus_wrapped *wrapped_bus, struct pci_dev_wrapped *wrapped_dev)
-{
- struct pci_bus *bus = wrapped_bus->bus;
-
-#ifdef CONFIG_PROC_FS
- /* Now, remove the Linux Representation */
- if (bus->procdir) {
- pci_proc_detach_bus(bus);
- }
-#endif
- /* the cleanup code should live in the kernel ... */
- bus->self->subordinate = NULL;
- /* unlink from parent bus */
- list_del(&bus->node);
-
- /* Now, remove */
- if (bus)
- kfree(bus);
-
- return 0;
-}
-
-
/* detect_used_resource - subtract resource under dev from bridge */
static int detect_used_resource (struct acpiphp_bridge *bridge, struct pci_dev *dev)
{
@@ -592,22 +465,6 @@
return retval;
}
-
-/* for pci_visit_dev() */
-static struct pci_visit configure_functions = {
- .post_visit_pci_dev = configure_pci_dev
-};
-
-static struct pci_visit unconfigure_functions_phase1 = {
- .post_visit_pci_dev = unconfigure_pci_dev_driver
-};
-
-static struct pci_visit unconfigure_functions_phase2 = {
- .post_visit_pci_bus = unconfigure_pci_bus,
- .post_visit_pci_dev = unconfigure_pci_dev
-};
-
-
/**
* acpiphp_configure_function - configure PCI function
* @func: function to be configured
@@ -618,33 +475,10 @@
*/
int acpiphp_configure_function (struct acpiphp_func *func)
{
- int retval = 0;
- struct pci_dev_wrapped wrapped_dev;
- struct pci_bus_wrapped wrapped_bus;
- struct acpiphp_bridge *bridge;
-
- /* if pci_dev is NULL, ignore it */
- if (!func->pci_dev)
- goto err_exit;
-
- bridge = func->slot->bridge;
-
- memset(&wrapped_dev, 0, sizeof(struct pci_dev_wrapped));
- memset(&wrapped_bus, 0, sizeof(struct pci_bus_wrapped));
- wrapped_dev.dev = func->pci_dev;
- wrapped_dev.data = func;
- wrapped_bus.bus = bridge->pci_bus;
- wrapped_bus.data = bridge;
-
- retval = pci_visit_dev(&configure_functions, &wrapped_dev, &wrapped_bus);
- if (retval)
- goto err_exit;
-
- err_exit:
- return retval;
+ /* all handled by the pci core now */
+ return 0;
}
-
/**
* acpiphp_unconfigure_function - unconfigure PCI function
* @func: function to be unconfigured
@@ -653,28 +487,13 @@
int acpiphp_unconfigure_function (struct acpiphp_func *func)
{
struct acpiphp_bridge *bridge;
- struct pci_dev_wrapped wrapped_dev;
- struct pci_bus_wrapped wrapped_bus;
int retval = 0;
/* if pci_dev is NULL, ignore it */
if (!func->pci_dev)
goto err_exit;
- memset(&wrapped_dev, 0, sizeof(struct pci_dev_wrapped));
- memset(&wrapped_bus, 0, sizeof(struct pci_bus_wrapped));
- wrapped_dev.dev = func->pci_dev;
- //wrapped_dev.data = func;
- wrapped_bus.bus = func->slot->bridge->pci_bus;
- //wrapped_bus.data = func->slot->bridge;
-
- retval = pci_visit_dev(&unconfigure_functions_phase1, &wrapped_dev, &wrapped_bus);
- if (retval)
- goto err_exit;
-
- retval = pci_visit_dev(&unconfigure_functions_phase2, &wrapped_dev, &wrapped_bus);
- if (retval)
- goto err_exit;
+ pci_remove_bus_device(func->pci_dev);
/* free all resources */
bridge = func->slot->bridge;
diff -Nru a/drivers/hotplug/cpci_hotplug_pci.c b/drivers/hotplug/cpci_hotplug_pci.c
--- a/drivers/hotplug/cpci_hotplug_pci.c Tue Mar 4 19:30:11 2003
+++ b/drivers/hotplug/cpci_hotplug_pci.c Tue Mar 4 19:30:11 2003
@@ -483,29 +483,6 @@
return 0;
}
-static int unconfigure_visit_pci_dev_phase1(struct pci_dev_wrapped *wrapped_dev,
- struct pci_bus_wrapped *wrapped_bus)
-{
- struct pci_dev *dev = wrapped_dev->dev;
-
- dbg("%s - enter", __FUNCTION__);
-
- dbg("attempting removal of driver for device %02x:%02x.%x",
- dev->bus->number, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn));
-
- /* Now, remove the Linux Driver representation */
- if(dev->driver) {
- dbg("device is attached to a driver");
- if(dev->driver->remove) {
- dev->driver->remove(dev);
- dbg("driver was removed");
- }
- dev->driver = NULL;
- }
- dbg("%s - exit", __FUNCTION__);
- return pci_is_dev_in_use(dev);
-}
-
static int unconfigure_visit_pci_dev_phase2(struct pci_dev_wrapped *wrapped_dev,
struct pci_bus_wrapped *wrapped_bus)
{
@@ -577,10 +554,6 @@
.visit_pci_dev = configure_visit_pci_dev,
};
-static struct pci_visit unconfigure_functions_phase1 = {
- .post_visit_pci_dev = unconfigure_visit_pci_dev_phase1
-};
-
static struct pci_visit unconfigure_functions_phase2 = {
.post_visit_pci_bus = unconfigure_visit_pci_bus_phase2,
.post_visit_pci_dev = unconfigure_visit_pci_dev_phase2
@@ -668,13 +641,6 @@
if(dev) {
wrapped_dev.dev = dev;
wrapped_bus.bus = dev->bus;
- dbg("%s - unconfigure phase 1", __FUNCTION__);
- rc = pci_visit_dev(&unconfigure_functions_phase1,
- &wrapped_dev, &wrapped_bus);
- if(rc) {
- break;
- }
-
dbg("%s - unconfigure phase 2", __FUNCTION__);
rc = pci_visit_dev(&unconfigure_functions_phase2,
&wrapped_dev, &wrapped_bus);
diff -Nru a/drivers/hotplug/cpqphp.h b/drivers/hotplug/cpqphp.h
--- a/drivers/hotplug/cpqphp.h Tue Mar 4 19:30:13 2003
+++ b/drivers/hotplug/cpqphp.h Tue Mar 4 19:30:13 2003
@@ -403,31 +403,8 @@
#define msg_button_ignore "PCI slot #%d - button press ignored. (action in progress...)\n"
-/* Proc functions for the hotplug controller info */
-#ifdef CONFIG_PROC_FS
-extern int cpqhp_proc_init_ctrl (void);
-extern int cpqhp_proc_destroy_ctrl (void);
-extern int cpqhp_proc_create_ctrl (struct controller *ctrl);
-extern int cpqhp_proc_remove_ctrl (struct controller *ctrl);
-#else
-static inline int cpqhp_proc_init_ctrl (void)
-{
- return 0;
-}
-static inline int cpqhp_proc_destroy_ctrl (void)
-{
- return 0;
-}
-static inline int cpqhp_proc_create_ctrl (struct controller *ctrl)
-{
- return 0;
-}
-static inline int cpqhp_proc_remove_ctrl (struct controller *ctrl)
-{
- return 0;
-}
-#endif
-
+/* sysfs functions for the hotplug controller info */
+extern void cpqhp_create_ctrl_files (struct controller *ctrl);
/* controller functions */
extern void cpqhp_pushbutton_thread (unsigned long event_pointer);
diff -Nru a/drivers/hotplug/cpqphp_core.c b/drivers/hotplug/cpqphp_core.c
--- a/drivers/hotplug/cpqphp_core.c Tue Mar 4 19:30:03 2003
+++ b/drivers/hotplug/cpqphp_core.c Tue Mar 4 19:30:03 2003
@@ -1038,6 +1038,7 @@
dbg (" pcix_support %s\n", ctrl->pcix_support == 0 ? "not supported" : "supported");
ctrl->pci_dev = pdev;
+ pci_set_drvdata(pdev, ctrl);
/* make our own copy of the pci bus structure, as we like tweaking it a lot */
ctrl->pci_bus = kmalloc (sizeof (*ctrl->pci_bus), GFP_KERNEL);
@@ -1231,11 +1232,7 @@
// Done with exclusive hardware access
up(&ctrl->crit_sect);
- rc = cpqhp_proc_create_ctrl (ctrl);
- if (rc) {
- err("cpqhp_proc_create_ctrl failed\n");
- goto err_free_irq;
- }
+ cpqhp_create_ctrl_files (ctrl);
return 0;
@@ -1309,10 +1306,6 @@
goto error;
}
- retval = cpqhp_proc_init_ctrl();
- if (retval)
- goto error;
-
initialized = 1;
return retval;
@@ -1343,8 +1336,6 @@
ctrl = cpqhp_ctrl_list;
while (ctrl) {
- cpqhp_proc_remove_ctrl (ctrl);
-
if (ctrl->hpc_reg) {
u16 misc;
rc = read_slot_enable (ctrl);
@@ -1431,8 +1422,6 @@
}
}
- remove_proc_entry("hpc", 0);
-
// Stop the notification mechanism
cpqhp_event_stop_thread();
@@ -1490,9 +1479,6 @@
static void __exit cpqhpc_cleanup(void)
{
- dbg("cleaning up proc entries\n");
- cpqhp_proc_destroy_ctrl();
-
dbg("unload_cpqphpd()\n");
unload_cpqphpd();
diff -Nru a/drivers/hotplug/cpqphp_pci.c b/drivers/hotplug/cpqphp_pci.c
--- a/drivers/hotplug/cpqphp_pci.c Tue Mar 4 19:30:13 2003
+++ b/drivers/hotplug/cpqphp_pci.c Tue Mar 4 19:30:13 2003
@@ -44,48 +44,6 @@
static u16 unused_IRQ;
-
-static int is_pci_dev_in_use(struct pci_dev* dev)
-{
- /*
- * dev->driver will be set if the device is in use by a new-style
- * driver -- otherwise, check the device's regions to see if any
- * driver has claimed them
- */
-
- int i, inuse=0;
-
- if (dev->driver) return 1; //assume driver feels responsible
-
- for (i = 0; !dev->driver && !inuse && (i < 6); i++) {
- if (!pci_resource_start(dev, i))
- continue;
-
- if (pci_resource_flags(dev, i) & IORESOURCE_IO)
- inuse = check_region(pci_resource_start(dev, i),
- pci_resource_len(dev, i));
- else if (pci_resource_flags(dev, i) & IORESOURCE_MEM)
- inuse = check_mem_region(pci_resource_start(dev, i),
- pci_resource_len(dev, i));
- }
-
- return inuse;
-
-}
-
-
-static int pci_hp_remove_device(struct pci_dev *dev)
-{
- if (is_pci_dev_in_use(dev)) {
- err("***Cannot safely power down device -- "
- "it appears to be in use***\n");
- return -EBUSY;
- }
- pci_remove_device(dev);
- return 0;
-}
-
-
/*
* detect_HRT_floating_pointer
*
@@ -122,144 +80,14 @@
return fp;
}
-static int configure_visit_pci_dev (struct pci_dev_wrapped *wrapped_dev, struct pci_bus_wrapped *wrapped_bus)
-{
- struct pci_bus* bus = wrapped_bus->bus;
- struct pci_dev* dev = wrapped_dev->dev;
- struct pci_func *temp_func;
- int i=0;
-
- //We need to fix up the hotplug function representation with the linux representation
- do {
- temp_func = cpqhp_slot_find(dev->bus->number, dev->devfn >> 3, i++);
- } while (temp_func && (temp_func->function != (dev->devfn & 0x07)));
-
- if (temp_func) {
- temp_func->pci_dev = dev;
- } else {
- //We did not even find a hotplug rep of the function, create it
- //This code might be taken out if we can guarantee the creation of functions
- //in parallel (hotplug and Linux at the same time).
- dbg("@@@@@@@@@@@ cpqhp_slot_create in %s\n", __FUNCTION__);
- temp_func = cpqhp_slot_create(bus->number);
- if (temp_func == NULL)
- return -ENOMEM;
- temp_func->pci_dev = dev;
- }
-
- //Create /proc/bus/pci proc entry for this device and bus device is on
- //Notify the drivers of the change
- if (temp_func->pci_dev) {
-// pci_insert_device (temp_func->pci_dev, bus);
-// pci_proc_attach_device(temp_func->pci_dev);
-// pci_announce_device_to_drivers(temp_func->pci_dev);
- }
-
- return 0;
-}
-
-
-static int unconfigure_visit_pci_dev_phase2 (struct pci_dev_wrapped *wrapped_dev, struct pci_bus_wrapped *wrapped_bus)
-{
- struct pci_dev* dev = wrapped_dev->dev;
-
- struct pci_func *temp_func;
- int i=0;
-
- //We need to remove the hotplug function representation with the linux representation
- do {
- temp_func = cpqhp_slot_find(dev->bus->number, dev->devfn >> 3, i++);
- if (temp_func) {
- dbg("temp_func->function = %d\n", temp_func->function);
- }
- } while (temp_func && (temp_func->function != (dev->devfn & 0x07)));
-
- //Now, remove the Linux Representation
- if (dev) {
- if (pci_hp_remove_device(dev) == 0) {
- kfree(dev); //Now, remove
- } else {
- return -1; // problems while freeing, abort visitation
- }
- }
-
- if (temp_func) {
- temp_func->pci_dev = NULL;
- } else {
- dbg("No pci_func representation for bus, devfn = %d, %x\n", dev->bus->number, dev->devfn);
- }
-
- return 0;
-}
-
-
-static int unconfigure_visit_pci_bus_phase2 (struct pci_bus_wrapped *wrapped_bus, struct pci_dev_wrapped *wrapped_dev)
-{
- struct pci_bus* bus = wrapped_bus->bus;
-
- //The cleanup code for proc entries regarding buses should be in the kernel...
- if (bus->procdir)
- dbg("detach_pci_bus %s\n", bus->procdir->name);
- pci_proc_detach_bus(bus);
- // The cleanup code should live in the kernel...
- bus->self->subordinate = NULL;
- // unlink from parent bus
- list_del(&bus->node);
-
- // Now, remove
- if (bus)
- kfree(bus);
-
- return 0;
-}
-
-
-static int unconfigure_visit_pci_dev_phase1 (struct pci_dev_wrapped *wrapped_dev, struct pci_bus_wrapped *wrapped_bus)
-{
- struct pci_dev* dev = wrapped_dev->dev;
-
- dbg("attempting removal of driver for device (%x, %x, %x)\n", dev->bus->number, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn));
- //Now, remove the Linux Driver Representation
- if (dev->driver) {
- if (dev->driver->remove) {
- dev->driver->remove(dev);
- dbg("driver was properly removed\n");
- }
- dev->driver = NULL;
- }
-
- return is_pci_dev_in_use(dev);
-}
-
-
-static struct pci_visit configure_functions = {
- .visit_pci_dev = configure_visit_pci_dev,
-};
-
-
-static struct pci_visit unconfigure_functions_phase1 = {
- .post_visit_pci_dev = unconfigure_visit_pci_dev_phase1
-};
-
-static struct pci_visit unconfigure_functions_phase2 = {
- .post_visit_pci_bus = unconfigure_visit_pci_bus_phase2,
- .post_visit_pci_dev = unconfigure_visit_pci_dev_phase2
-};
-
int cpqhp_configure_device (struct controller* ctrl, struct pci_func* func)
{
unsigned char bus;
struct pci_dev dev0;
struct pci_bus *child;
- struct pci_dev* temp;
int rc = 0;
- struct pci_dev_wrapped wrapped_dev;
- struct pci_bus_wrapped wrapped_bus;
- memset(&wrapped_dev, 0, sizeof(struct pci_dev_wrapped));
- memset(&wrapped_bus, 0, sizeof(struct pci_bus_wrapped));
-
memset(&dev0, 0, sizeof(struct pci_dev));
if (func->pci_dev == NULL)
@@ -287,44 +115,22 @@
}
- temp = func->pci_dev;
-
- if (temp) {
- wrapped_dev.dev = temp;
- wrapped_bus.bus = temp->bus;
- rc = pci_visit_dev(&configure_functions, &wrapped_dev, &wrapped_bus);
- }
return rc;
}
int cpqhp_unconfigure_device(struct pci_func* func)
{
- int rc = 0;
int j;
- struct pci_dev_wrapped wrapped_dev;
- struct pci_bus_wrapped wrapped_bus;
- memset(&wrapped_dev, 0, sizeof(struct pci_dev_wrapped));
- memset(&wrapped_bus, 0, sizeof(struct pci_bus_wrapped));
-
dbg("%s: bus/dev/func = %x/%x/%x\n", __FUNCTION__, func->bus, func->device, func->function);
for (j=0; j<8 ; j++) {
struct pci_dev* temp = pci_find_slot(func->bus, (func->device << 3) | j);
- if (temp) {
- wrapped_dev.dev = temp;
- wrapped_bus.bus = temp->bus;
- rc = pci_visit_dev(&unconfigure_functions_phase1, &wrapped_dev, &wrapped_bus);
- if (rc)
- break;
-
- rc = pci_visit_dev(&unconfigure_functions_phase2, &wrapped_dev, &wrapped_bus);
- if (rc)
- break;
- }
+ if (temp)
+ pci_remove_bus_device(temp);
}
- return rc;
+ return 0;
}
static int PCI_RefinedAccessConfig(struct pci_bus *bus, unsigned int devfn, u8 offset, u32 *value)
diff -Nru a/drivers/hotplug/cpqphp_proc.c b/drivers/hotplug/cpqphp_proc.c
--- a/drivers/hotplug/cpqphp_proc.c Tue Mar 4 19:30:13 2003
+++ /dev/null Wed Dec 31 16:00:00 1969
@@ -1,197 +0,0 @@
-/*
- * Compaq Hot Plug Controller Driver
- *
- * Copyright (c) 1995,2001 Compaq Computer Corporation
- * Copyright (c) 2001 Greg Kroah-Hartman (greg@kroah.com)
- * Copyright (c) 2001 IBM Corp.
- *
- * All rights reserved.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or (at
- * your option) any later version.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
- * NON INFRINGEMENT. See the GNU General Public License for more
- * details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
- *
- * Send feedback to
- *
- */
-
-#include
-#include
-#include
-#include
-#include
-#include
-#include