diff -urpN linux-2.4.9-linus/Documentation/cachetlb.txt linux-2.4.9-larpage/Documentation/cachetlb.txt --- linux-2.4.9-linus/Documentation/cachetlb.txt 2001-03-25 18:14:20.000000000 -0800 +++ linux-2.4.9-larpage/Documentation/cachetlb.txt 2002-11-20 02:02:13.000000000 -0800 @@ -68,9 +68,19 @@ changes occur: call flush_tlb_page (see below) for each entry which may be modified. -4) void flush_tlb_page(struct vm_area_struct *vma, unsigned long page) +3k) void flush_tlb_range_k(unsigned long start, unsigned long end) - This time we need to remove the PAGE_SIZE sized translation + Here we are flushing a specific range of kernel virtual + address translations from the TLB. After running, this + interface must make sure that any previous page table + modifications in the range 'start' to 'end', including those + to "global" translations, are visible on this and other cpus. + + Primarily, this is for use by vfree(). + +4) void flush_tlb_page(struct vm_area_struct *vma, unsigned long address) + + This time we need to remove the MMUPAGE_SIZE sized translation from the TLB. The 'vma' is the backing structure used by Linux to keep track of mmap'd regions for a process, the address space is available via vma->vm_mm. Also, one may @@ -84,7 +94,9 @@ changes occur: is, after running, there will be no entries in the TLB for 'vma->vm_mm' for virtual address 'page'. - This is used primarily during fault processing. + Primarily, this is used in fault processing; but if large + kernel pages (PAGE_SIZE a multiple of MMUPAGE_SIZE) are + enabled, then flush_tlb_range() is generally preferred. 5) void flush_tlb_pgtables(struct mm_struct *mm, unsigned long start, unsigned long end) @@ -122,6 +134,10 @@ changes occur: translations for software managed TLB configurations. The sparc64 port currently does this. + update_mmu_cache() is an empty macro on i386, and large kernel + pages (PAGE_SIZE a multiple of MMUPAGE_SIZE) have not yet been + implemented on other architectures: it may need to be replaced. + Next, we have the cache flushing interfaces. In general, when Linux is changing an existing virtual-->physical mapping to a new value, the sequence will be in one of the following forms: @@ -134,9 +150,9 @@ the sequence will be in one of the follo change_range_of_page_tables(mm, start, end); flush_tlb_range(mm, start, end); - 3) flush_cache_page(vma, page); + 3) flush_cache_page(vma, address); set_pte(pte_pointer, new_pte_val); - flush_tlb_page(vma, page); + flush_tlb_page(vma, address); The cache level flush will always be first, because this allows us to properly handle systems whose caches are strict and require @@ -189,7 +205,7 @@ Here are the routines, one by one: call flush_cache_page (see below) for each entry which may be modified. -4) void flush_cache_page(struct vm_area_struct *vma, unsigned long page) +4) void flush_cache_page(struct vm_area_struct *vma, unsigned long address) This time we need to remove a PAGE_SIZE sized range from the cache. The 'vma' is the backing structure used by @@ -204,6 +220,10 @@ Here are the routines, one by one: This is used primarily during fault processing. + flush_cache_page() is an empty macro on i386, and large kernel + pages (PAGE_SIZE a multiple of MMUPAGE_SIZE) have not yet been + implemented on other architectures: it may need to be replaced. + There exists another whole class of cpu cache issues which currently require a whole different set of interfaces to handle properly. The biggest problem is that of virtual aliasing in the data cache @@ -255,8 +275,8 @@ interface. 
It does not give the architecture enough information about
what exactly is going on, and there is no context to base a judgment
on about whether an alias is possible at all. The new interfaces to
deal with D-cache aliasing are meant to address this by telling the
-architecture specific code exactly which is going on at the proper points
-in time.
+architecture specific code exactly what is going on at the proper
+points in time.

 Here is the new interface:
diff -urpN linux-2.4.9-linus/Documentation/cachetlb.txt.orig linux-2.4.9-larpage/Documentation/cachetlb.txt.orig
--- linux-2.4.9-linus/Documentation/cachetlb.txt.orig	1969-12-31 16:00:00.000000000 -0800
+++ linux-2.4.9-larpage/Documentation/cachetlb.txt.orig	2002-11-20 02:02:13.000000000 -0800
@@ -0,0 +1,359 @@
+		Cache and TLB Flushing
+		     Under Linux
+
+		David S. Miller
+
+This document describes the cache/tlb flushing interfaces called
+by the Linux VM subsystem. It enumerates over each interface,
+describes its intended purpose, and what side effect is expected
+after the interface is invoked.
+
+The side effects described below are stated for a uniprocessor
+implementation, and what is to happen on that single processor. The
+SMP cases are a simple extension, in that you just extend the
+definition such that the side effect for a particular interface occurs
+on all processors in the system. Don't let this scare you into
+thinking SMP cache/tlb flushing must be so inefficient; this is in
+fact an area where many optimizations are possible. For example,
+if it can be proven that a user address space has never executed
+on a cpu (see vma->cpu_vm_mask), one need not perform a flush
+for this address space on that cpu.
+
+First, the TLB flushing interfaces, since they are the simplest. The
+"TLB" is abstracted under Linux as something the cpu uses to cache
+virtual-->physical address translations obtained from the software
+page tables. Meaning that if the software page tables change, it is
+possible for stale translations to exist in this "TLB" cache.
+Therefore when software page table changes occur, the kernel will
+invoke one of the following flush methods _after_ the page table
+changes occur:
+
+1) void flush_tlb_all(void)
+
+	The most severe flush of all. After this interface runs,
+	any previous page table modification whatsoever will be
+	visible to the cpu.
+
+	This is usually invoked when the kernel page tables are
+	changed, since such translations are "global" in nature.
+
+2) void flush_tlb_mm(struct mm_struct *mm)
+
+	This interface flushes an entire user address space from
+	the TLB. After running, this interface must make sure that
+	any previous page table modifications for the address space
+	'mm' will be visible to the cpu. That is, after running,
+	there will be no entries in the TLB for 'mm'.
+
+	This interface is used to handle whole address space
+	page table operations such as what happens during
+	fork, and exec.
+
+3) void flush_tlb_range(struct mm_struct *mm,
+			unsigned long start, unsigned long end)
+
+	Here we are flushing a specific range of (user) virtual
+	address translations from the TLB. After running, this
+	interface must make sure that any previous page table
+	modifications for the address space 'mm' in the range 'start'
+	to 'end' will be visible to the cpu. That is, after running,
+	there will be no entries in the TLB for 'mm' for virtual
+	addresses in the range 'start' to 'end'.
+
+	Primarily, this is used for munmap() type operations.
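
	As a sketch (not from the original document; it reuses the
	pseudo-helper change_range_of_page_tables() named in the
	update sequences quoted later in this patch), a munmap-type
	path pairs the page table change with a single ranged flush:

		flush_cache_range(mm, start, end);
		change_range_of_page_tables(mm, start, end);
		flush_tlb_range(mm, start, end);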
+ + The interface is provided in hopes that the port can find + a suitably efficient method for removing multiple page + sized translations from the TLB, instead of having the kernel + call flush_tlb_page (see below) for each entry which may be + modified. + +3k) void flush_tlb_range_k(unsigned long start, unsigned long end) + + Here we are flushing a specific range of kernel virtual + address translations from the TLB. After running, this + interface must make sure that any previous page table + modifications in the range 'start' to 'end', including those + to "global" translations, are visible on this and other cpus. + + Primarily, this is for use by vfree(). + +4) void flush_tlb_page(struct vm_area_struct *vma, unsigned long address) + + This time we need to remove the MMUPAGE_SIZE sized translation + from the TLB. The 'vma' is the backing structure used by + Linux to keep track of mmap'd regions for a process, the + address space is available via vma->vm_mm. Also, one may + test (vma->vm_flags & VM_EXEC) to see if this region is + executable (and thus could be in the 'instruction TLB' in + split-tlb type setups). + + After running, this interface must make sure that any previous + page table modification for address space 'vma->vm_mm' for + user virtual address 'page' will be visible to the cpu. That + is, after running, there will be no entries in the TLB for + 'vma->vm_mm' for virtual address 'page'. + + Primarily, this is used in fault processing; but if large + kernel pages (PAGE_SIZE a multiple of MMUPAGE_SIZE) are + enabled, then flush_tlb_range() is generally preferred. + +5) void flush_tlb_pgtables(struct mm_struct *mm, + unsigned long start, unsigned long end) + + The software page tables for address space 'mm' for virtual + addresses in the range 'start' to 'end' are being torn down. + + Some platforms cache the lowest level of the software page tables + in a linear virtually mapped array, to make TLB miss processing + more efficient. On such platforms, since the TLB is caching the + software page table structure, it needs to be flushed when parts + of the software page table tree are unlinked/freed. + + Sparc64 is one example of a platform which does this. + + Usually, when munmap()'ing an area of user virtual address + space, the kernel leaves the page table parts around and just + marks the individual pte's as invalid. However, if very large + portions of the address space are unmapped, the kernel frees up + those portions of the software page tables to prevent potential + excessive kernel memory usage caused by erratic mmap/mmunmap + sequences. It is at these times that flush_tlb_pgtables will + be invoked. + +6) void update_mmu_cache(struct vm_area_struct *vma, + unsigned long address, pte_t pte) + + At the end of every page fault, this routine is invoked to + tell the architecture specific code that a translation + described by "pte" now exists at virtual address "address" + for address space "vma->vm_mm", in the software page tables. + + A port may use this information in any way it so chooses. + For example, it could use this event to pre-load TLB + translations for software managed TLB configurations. + The sparc64 port currently does this. + + update_mmu_cache() is an empty macro on i386, and large kernel + pages (PAGE_SIZE a multiple of MMUPAGE_SIZE) have not yet been + implemented on other architectures: it may need to be replaced. + +Next, we have the cache flushing interfaces. 
In general, when Linux +is changing an existing virtual-->physical mapping to a new value, +the sequence will be in one of the following forms: + + 1) flush_cache_mm(mm); + change_all_page_tables_of(mm); + flush_tlb_mm(mm); + + 2) flush_cache_range(mm, start, end); + change_range_of_page_tables(mm, start, end); + flush_tlb_range(mm, start, end); + + 3) flush_cache_page(vma, address); + set_pte(pte_pointer, new_pte_val); + flush_tlb_page(vma, address); + +The cache level flush will always be first, because this allows +us to properly handle systems whose caches are strict and require +a virtual-->physical translation to exist for a virtual address +when that virtual address is flushed from the cache. The HyperSparc +cpu is one such cpu with this attribute. + +The cache flushing routines below need only deal with cache flushing +to the extent that it is necessary for a particular cpu. Mostly, +these routines must be implemented for cpus which have virtually +indexed caches which must be flushed when virtual-->physical +translations are changed or removed. So, for example, the physically +indexed physically tagged caches of IA32 processors have no need to +implement these interfaces since the caches are fully synchronized +and have no dependency on translation information. + +Here are the routines, one by one: + +1) void flush_cache_all(void) + + The most severe flush of all. After this interface runs, + the entire cpu cache is flushed. + + This is usually invoked when the kernel page tables are + changed, since such translations are "global" in nature. + +2) void flush_cache_mm(struct mm_struct *mm) + + This interface flushes an entire user address space from + the caches. That is, after running, there will be no cache + lines associated with 'mm'. + + This interface is used to handle whole address space + page table operations such as what happens during + fork, exit, and exec. + +3) void flush_cache_range(struct mm_struct *mm, + unsigned long start, unsigned long end) + + Here we are flushing a specific range of (user) virtual + addresses from the cache. After running, there will be no + entries in the cache for 'mm' for virtual addresses in the + range 'start' to 'end'. + + Primarily, this is used for munmap() type operations. + + The interface is provided in hopes that the port can find + a suitably efficient method for removing multiple page + sized regions from the cache, instead of having the kernel + call flush_cache_page (see below) for each entry which may be + modified. + +4) void flush_cache_page(struct vm_area_struct *vma, unsigned long address) + + This time we need to remove a PAGE_SIZE sized range + from the cache. The 'vma' is the backing structure used by + Linux to keep track of mmap'd regions for a process, the + address space is available via vma->vm_mm. Also, one may + test (vma->vm_flags & VM_EXEC) to see if this region is + executable (and thus could be in the 'instruction cache' in + "Harvard" type cache layouts). + + After running, there will be no entries in the cache for + 'vma->vm_mm' for virtual address 'page'. + + This is used primarily during fault processing. + + flush_cache_page() is an empty macro on i386, and large kernel + pages (PAGE_SIZE a multiple of MMUPAGE_SIZE) have not yet been + implemented on other architectures: it may need to be replaced. + +There exists another whole class of cpu cache issues which currently +require a whole different set of interfaces to handle properly. 
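
	Before those, sequence 3) above deserves one concrete reading.
	At fault time it amounts to the following sketch (pte lookup
	and locking omitted; the update_mmu_cache() call is assumed
	from its description earlier, not quoted from any port):

		flush_cache_page(vma, address);	/* old lines out of the cache */
		set_pte(pte_pointer, new_pte_val);
		flush_tlb_page(vma, address);	/* no stale TLB entry remains */
		update_mmu_cache(vma, address, new_pte_val);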
+The biggest problem is that of virtual aliasing in the data cache
+of a processor.
+
+Is your port susceptible to virtual aliasing in its D-cache?
+Well, if your D-cache is virtually indexed, is larger in size than
+PAGE_SIZE, and does not prevent multiple cache lines for the same
+physical address from existing at once, you have this problem.
+
+If your D-cache has this problem, first define asm/shmparam.h SHMLBA
+properly; it should essentially be the size of your virtually
+addressed D-cache (or if the size is variable, the largest possible
+size). This setting will force the SYSv IPC layer to only allow user
+processes to mmap shared memory at addresses which are a multiple of
+this value.
+
+NOTE: This does not fix shared mmaps; check out the sparc64 port for
+one way to solve this (in particular SPARC_FLAG_MMAPSHARED).
+
+Next, you have two methods to solve the D-cache aliasing issue for all
+other cases. Please keep in mind the fact that, for a given page
+mapped into some user address space, there is always at least one more
+mapping, that of the kernel in its linear mapping starting at
+PAGE_OFFSET. So immediately, once the first user maps a given
+physical page into its address space, by implication the D-cache
+aliasing problem has the potential to exist since the kernel already
+maps this page at its virtual address.
+
+First, I describe the old method to deal with this problem. I am
+describing it for documentation purposes, but it is deprecated and the
+latter method I describe next should be used by all new ports and all
+existing ports should move over to the new mechanism as well.
+
+	flush_page_to_ram(struct page *page)
+
+	The physical page 'page' is about to be placed into the
+	user address space of a process. If it is possible for
+	stores done recently by the kernel into this physical
+	page to not be visible to an arbitrary mapping in userspace,
+	you must flush this page from the D-cache.
+
+	If the D-cache is writeback in nature, the dirty data (if
+	any) for this physical page must be written back to main
+	memory before the cache lines are invalidated.
+
+Admittedly, the author did not think very much when designing this
+interface. It does not give the architecture enough information about
+what exactly is going on, and there is no context to base a judgment
+on about whether an alias is possible at all. The new interfaces to
+deal with D-cache aliasing are meant to address this by telling the
+architecture specific code exactly what is going on at the proper
+points in time.
+
+Here is the new interface:
+
+	void copy_user_page(void *to, void *from, unsigned long address)
+	void clear_user_page(void *to, unsigned long address)
+
+	These two routines store data in user anonymous or COW
+	pages. They allow a port to efficiently avoid D-cache alias
+	issues between userspace and the kernel.
+
+	For example, a port may temporarily map 'from' and 'to' to
+	kernel virtual addresses during the copy. The virtual address
+	for these two pages is chosen in such a way that the kernel
+	load/store instructions happen to virtual addresses which are
+	of the same "color" as the user mapping of the page. Sparc64,
+	for example, uses this technique.
+
+	The "address" parameter tells the virtual address where the
+	user will ultimately have this page mapped.
+
+	If D-cache aliasing is not an issue, these two routines may
+	simply call memcpy/memset directly and do nothing more.
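
	In that non-aliasing case the pair reduces to a minimal
	sketch like the following (not verbatim from any port; the
	'address' argument, the user-side color, is simply ignored):

		void clear_user_page(void *to, unsigned long address)
		{
			memset(to, 0, PAGE_SIZE);
		}

		void copy_user_page(void *to, void *from, unsigned long address)
		{
			memcpy(to, from, PAGE_SIZE);
		}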
+ + void flush_dcache_page(struct page *page) + + Any time the kernel writes to a page cache page, _OR_ + the kernel is about to read from a page cache page and + user space shared/writable mappings of this page potentially + exist, this routine is called. + + NOTE: This routine need only be called for page cache pages + which can potentially ever be mapped into the address + space of a user process. So for example, VFS layer code + handling vfs symlinks in the page cache need not call + this interface at all. + + The phrase "kernel writes to a page cache page" means, + specifically, that the kernel executes store instructions + that dirty data in that page at the page->virtual mapping + of that page. It is important to flush here to handle + D-cache aliasing, to make sure these kernel stores are + visible to user space mappings of that page. + + The corollary case is just as important, if there are users + which have shared+writable mappings of this file, we must make + sure that kernel reads of these pages will see the most recent + stores done by the user. + + If D-cache aliasing is not an issue, this routine may + simply be defined as a nop on that architecture. + + There is a bit set aside in page->flags (PG_arch_1) as + "architecture private". The kernel guarantees that, + for pagecache pages, it will clear this bit when such + a page first enters the pagecache. + + This allows these interfaces to be implemented much more + efficiently. It allows one to "defer" (perhaps indefinitely) + the actual flush if there are currently no user processes + mapping this page. See sparc64's flush_dcache_page and + update_mmu_cache implementations for an example of how to go + about doing this. + + The idea is, first at flush_dcache_page() time, if + page->mapping->i_mmap{,_shared} are empty lists, just mark the + architecture private page flag bit. Later, in + update_mmu_cache(), a check is made of this flag bit, and if + set the flush is done and the flag bit is cleared. + + void flush_icache_range(unsigned long start, unsigned long end) + When the kernel stores into addresses that it will execute + out of (eg when loading modules), this function is called. + + If the icache does not snoop stores then this routine will need + to flush it. + + void flush_icache_page(struct vm_area_struct *vma, struct page *page) + All the functionality of flush_icache_page can be implemented in + flush_dcache_page and update_mmu_cache. In 2.5 the hope is to + remove this interface completely. diff -urpN linux-2.4.9-linus/Documentation/filesystems/cramfs.txt linux-2.4.9-larpage/Documentation/filesystems/cramfs.txt --- linux-2.4.9-linus/Documentation/filesystems/cramfs.txt 2001-07-19 16:14:53.000000000 -0700 +++ linux-2.4.9-larpage/Documentation/filesystems/cramfs.txt 2002-11-20 02:02:13.000000000 -0800 @@ -40,11 +40,10 @@ the update lasts only as long as the ino which the timestamp reverts to 1970, i.e. moves backwards in time. Currently, cramfs must be written and read with architectures of the -same endianness, and can be read only by kernels with PAGE_CACHE_SIZE -== 4096. At least the latter of these is a bug, but it hasn't been -decided what the best fix is. For the moment if you have larger pages -you can just change the #define in mkcramfs.c, so long as you don't -mind the filesystem becoming unreadable to future kernels. +same endianness. mkcramfs and kernel now agree on blocksize 4096. 
+If you have larger pages, you can change the #define in cramfs_fs.h +to use a larger blocksize with better compression, so long as you +don't mind the filesystem being unreadable on other systems. For /usr/share/magic diff -urpN linux-2.4.9-linus/Documentation/filesystems/proc.txt linux-2.4.9-larpage/Documentation/filesystems/proc.txt --- linux-2.4.9-linus/Documentation/filesystems/proc.txt 2001-04-06 10:42:48.000000000 -0700 +++ linux-2.4.9-larpage/Documentation/filesystems/proc.txt 2002-11-20 02:02:14.000000000 -0800 @@ -172,10 +172,10 @@ Table 1-2: Contents of the statm files size total program size resident size of memory portions shared number of pages that are shared - trs number of pages that are 'code' - drs number of pages of data/stack - lrs number of pages of library - dt number of dirty pages + text number of pages that are 'code' + stack number of pages of stack + data number of pages of data + dirty number of dirty pages .............................................................................. 1.2 Kernel data diff -urpN linux-2.4.9-linus/Documentation/sound/cs46xx linux-2.4.9-larpage/Documentation/sound/cs46xx --- linux-2.4.9-linus/Documentation/sound/cs46xx 2001-05-19 17:43:05.000000000 -0700 +++ linux-2.4.9-larpage/Documentation/sound/cs46xx 2002-11-20 02:02:14.000000000 -0800 @@ -96,7 +96,7 @@ under Linux, a smaller buffer allows mor applications (e.g. games). A larger buffer allows some of the apps (esound) to not underrun the dma buffer as easily. As default, use 32k (order=3) rather than 64k as some of the games work more responsively. -(2^N) * PAGE_SIZE = allocated buffer size +(2^N) * MMUPAGE_SIZE = allocated buffer size MODULE_PARM(cs_debuglevel, "i"); MODULE_PARM(cs_debugmask, "i"); diff -urpN linux-2.4.9-linus/arch/i386/kernel/apic.c linux-2.4.9-larpage/arch/i386/kernel/apic.c --- linux-2.4.9-linus/arch/i386/kernel/apic.c 2001-06-20 11:06:38.000000000 -0700 +++ linux-2.4.9-larpage/arch/i386/kernel/apic.c 2002-11-20 02:02:14.000000000 -0800 @@ -354,7 +354,7 @@ void __init init_apic_mappings(void) * could use the real zero-page, but it's safer * this way if some buggy code writes to this page ... 
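 *
 * (With the larpage patch only MMUPAGE_SIZE, a single 4K hardware
 * page, is allocated here: a fixmap slot maps one mmu page, so a
 * full PAGE_SIZE allocation would waste the rest of a large page.)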
*/ - apic_phys = (unsigned long) alloc_bootmem_pages(PAGE_SIZE); + apic_phys = (unsigned long) alloc_bootmem_pages(MMUPAGE_SIZE); apic_phys = __pa(apic_phys); } set_fixmap_nocache(FIX_APIC_BASE, apic_phys); @@ -376,7 +376,7 @@ void __init init_apic_mappings(void) if (smp_found_config) { ioapic_phys = mp_ioapics[i].mpc_apicaddr; } else { - ioapic_phys = (unsigned long) alloc_bootmem_pages(PAGE_SIZE); + ioapic_phys = (unsigned long) alloc_bootmem_pages(MMUPAGE_SIZE); ioapic_phys = __pa(ioapic_phys); } set_fixmap_nocache(idx, ioapic_phys); diff -urpN linux-2.4.9-linus/arch/i386/kernel/mpparse.c linux-2.4.9-larpage/arch/i386/kernel/mpparse.c --- linux-2.4.9-linus/arch/i386/kernel/mpparse.c 2001-08-06 10:29:39.000000000 -0700 +++ linux-2.4.9-larpage/arch/i386/kernel/mpparse.c 2002-11-20 02:02:14.000000000 -0800 @@ -565,9 +565,9 @@ static int __init smp_scan_config (unsig smp_found_config = 1; printk("found SMP MP-table at %08lx\n", virt_to_phys(mpf)); - reserve_bootmem(virt_to_phys(mpf), PAGE_SIZE); + reserve_bootmem(virt_to_phys(mpf), MMUPAGE_SIZE); if (mpf->mpf_physptr) - reserve_bootmem(mpf->mpf_physptr, PAGE_SIZE); + reserve_bootmem(mpf->mpf_physptr, MMUPAGE_SIZE); mpf_found = mpf; return 1; } diff -urpN linux-2.4.9-linus/arch/i386/kernel/mpparse.c.orig linux-2.4.9-larpage/arch/i386/kernel/mpparse.c.orig --- linux-2.4.9-linus/arch/i386/kernel/mpparse.c.orig 1969-12-31 16:00:00.000000000 -0800 +++ linux-2.4.9-larpage/arch/i386/kernel/mpparse.c.orig 2001-08-06 10:29:39.000000000 -0700 @@ -0,0 +1,651 @@ +/* + * Intel Multiprocessor Specificiation 1.1 and 1.4 + * compliant MP-table parsing routines. + * + * (c) 1995 Alan Cox, Building #3 + * (c) 1998, 1999, 2000 Ingo Molnar + * + * Fixes + * Erich Boleyn : MP v1.4 and additional changes. + * Alan Cox : Added EBDA scanning + * Ingo Molnar : various cleanups and rewrites + * Maciej W. Rozycki : Bits for default MP configurations + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include + +/* Have we found an MP table */ +int smp_found_config; + +/* + * Various Linux-internal data structures created from the + * MP-table. + */ +int apic_version [MAX_APICS]; +int mp_bus_id_to_type [MAX_MP_BUSSES]; +int mp_bus_id_to_pci_bus [MAX_MP_BUSSES] = { [0 ... MAX_MP_BUSSES-1] = -1 }; +int mp_current_pci_id; +int pic_mode; +unsigned long mp_lapic_addr; + +/* Processor that is doing the boot up */ +unsigned int boot_cpu_id = -1U; +/* Internal processor count */ +static unsigned int num_processors; + +/* Bitmask of physically existing CPUs */ +unsigned long phys_cpu_present_map; + +/* + * Intel MP BIOS table parsing routines: + */ + +#ifndef CONFIG_X86_VISWS_APIC +/* + * Checksum an MP configuration block. 
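+ *
+ * (Sums 'len' bytes and returns the low byte: a conformant MP
+ * table checksums to zero, so callers treat any non-zero result
+ * as a corrupt table.)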
+ */ + +static int __init mpf_checksum(unsigned char *mp, int len) +{ + int sum = 0; + + while (len--) + sum += *mp++; + + return sum & 0xFF; +} + +/* + * Processor encoding in an MP configuration block + */ + +static char __init *mpc_family(int family,int model) +{ + static char n[32]; + static char *model_defs[]= + { + "80486DX","80486DX", + "80486SX","80486DX/2 or 80487", + "80486SL","80486SX/2", + "Unknown","80486DX/2-WB", + "80486DX/4","80486DX/4-WB" + }; + + switch (family) { + case 0x04: + if (model < 10) + return model_defs[model]; + break; + + case 0x05: + return("Pentium(tm)"); + + case 0x06: + return("Pentium(tm) Pro"); + + case 0x0F: + if (model == 0x00) + return("Pentium 4(tm)"); + if (model == 0x0F) + return("Special controller"); + } + sprintf(n,"Unknown CPU [%d:%d]",family, model); + return n; +} + +static void __init MP_processor_info (struct mpc_config_processor *m) +{ + int ver; + + if (!(m->mpc_cpuflag & CPU_ENABLED)) + return; + + printk("Processor #%d %s APIC version %d\n", + m->mpc_apicid, + mpc_family( (m->mpc_cpufeature & CPU_FAMILY_MASK)>>8 , + (m->mpc_cpufeature & CPU_MODEL_MASK)>>4), + m->mpc_apicver); + + if (m->mpc_featureflag&(1<<0)) + Dprintk(" Floating point unit present.\n"); + if (m->mpc_featureflag&(1<<7)) + Dprintk(" Machine Exception supported.\n"); + if (m->mpc_featureflag&(1<<8)) + Dprintk(" 64 bit compare & exchange supported.\n"); + if (m->mpc_featureflag&(1<<9)) + Dprintk(" Internal APIC present.\n"); + if (m->mpc_featureflag&(1<<11)) + Dprintk(" SEP present.\n"); + if (m->mpc_featureflag&(1<<12)) + Dprintk(" MTRR present.\n"); + if (m->mpc_featureflag&(1<<13)) + Dprintk(" PGE present.\n"); + if (m->mpc_featureflag&(1<<14)) + Dprintk(" MCA present.\n"); + if (m->mpc_featureflag&(1<<15)) + Dprintk(" CMOV present.\n"); + if (m->mpc_featureflag&(1<<16)) + Dprintk(" PAT present.\n"); + if (m->mpc_featureflag&(1<<17)) + Dprintk(" PSE present.\n"); + if (m->mpc_featureflag&(1<<18)) + Dprintk(" PSN present.\n"); + if (m->mpc_featureflag&(1<<19)) + Dprintk(" Cache Line Flush Instruction present.\n"); + /* 20 Reserved */ + if (m->mpc_featureflag&(1<<21)) + Dprintk(" Debug Trace and EMON Store present.\n"); + if (m->mpc_featureflag&(1<<22)) + Dprintk(" ACPI Thermal Throttle Registers present.\n"); + if (m->mpc_featureflag&(1<<23)) + Dprintk(" MMX present.\n"); + if (m->mpc_featureflag&(1<<24)) + Dprintk(" FXSR present.\n"); + if (m->mpc_featureflag&(1<<25)) + Dprintk(" XMM present.\n"); + if (m->mpc_featureflag&(1<<26)) + Dprintk(" Willamette New Instructions present.\n"); + if (m->mpc_featureflag&(1<<27)) + Dprintk(" Self Snoop present.\n"); + /* 28 Reserved */ + if (m->mpc_featureflag&(1<<29)) + Dprintk(" Thermal Monitor present.\n"); + /* 30, 31 Reserved */ + + + if (m->mpc_cpuflag & CPU_BOOTPROCESSOR) { + Dprintk(" Bootup CPU\n"); + boot_cpu_id = m->mpc_apicid; + } + num_processors++; + + if (m->mpc_apicid > MAX_APICS) { + printk("Processor #%d INVALID. (Max ID: %d).\n", + m->mpc_apicid, MAX_APICS); + return; + } + ver = m->mpc_apicver; + + phys_cpu_present_map |= 1 << m->mpc_apicid; + /* + * Validate version + */ + if (ver == 0x0) { + printk("BIOS bug, APIC version is 0 for CPU#%d! fixing up to 0x10. 
(tell your hw vendor)\n", m->mpc_apicid); + ver = 0x10; + } + apic_version[m->mpc_apicid] = ver; +} + +static void __init MP_bus_info (struct mpc_config_bus *m) +{ + char str[7]; + + memcpy(str, m->mpc_bustype, 6); + str[6] = 0; + Dprintk("Bus #%d is %s\n", m->mpc_busid, str); + + if (strncmp(str, BUSTYPE_ISA, sizeof(BUSTYPE_ISA)-1) == 0) { + mp_bus_id_to_type[m->mpc_busid] = MP_BUS_ISA; + } else if (strncmp(str, BUSTYPE_EISA, sizeof(BUSTYPE_EISA)-1) == 0) { + mp_bus_id_to_type[m->mpc_busid] = MP_BUS_EISA; + } else if (strncmp(str, BUSTYPE_PCI, sizeof(BUSTYPE_PCI)-1) == 0) { + mp_bus_id_to_type[m->mpc_busid] = MP_BUS_PCI; + mp_bus_id_to_pci_bus[m->mpc_busid] = mp_current_pci_id; + mp_current_pci_id++; + } else if (strncmp(str, BUSTYPE_MCA, sizeof(BUSTYPE_MCA)-1) == 0) { + mp_bus_id_to_type[m->mpc_busid] = MP_BUS_MCA; + } else { + printk("Unknown bustype %s - ignoring\n", str); + } +} + +static void __init MP_ioapic_info (struct mpc_config_ioapic *m) +{ + if (!(m->mpc_flags & MPC_APIC_USABLE)) + return; + + printk("I/O APIC #%d Version %d at 0x%lX.\n", + m->mpc_apicid, m->mpc_apicver, m->mpc_apicaddr); + if (nr_ioapics >= MAX_IO_APICS) { + printk("Max # of I/O APICs (%d) exceeded (found %d).\n", + MAX_IO_APICS, nr_ioapics); + panic("Recompile kernel with bigger MAX_IO_APICS!.\n"); + } + mp_ioapics[nr_ioapics] = *m; + nr_ioapics++; +} + +static void __init MP_intsrc_info (struct mpc_config_intsrc *m) +{ + mp_irqs [mp_irq_entries] = *m; + Dprintk("Int: type %d, pol %d, trig %d, bus %d," + " IRQ %02x, APIC ID %x, APIC INT %02x\n", + m->mpc_irqtype, m->mpc_irqflag & 3, + (m->mpc_irqflag >> 2) & 3, m->mpc_srcbus, + m->mpc_srcbusirq, m->mpc_dstapic, m->mpc_dstirq); + if (++mp_irq_entries == MAX_IRQ_SOURCES) + panic("Max # of irq sources exceeded!!\n"); +} + +static void __init MP_lintsrc_info (struct mpc_config_lintsrc *m) +{ + Dprintk("Lint: type %d, pol %d, trig %d, bus %d," + " IRQ %02x, APIC ID %x, APIC LINT %02x\n", + m->mpc_irqtype, m->mpc_irqflag & 3, + (m->mpc_irqflag >> 2) &3, m->mpc_srcbusid, + m->mpc_srcbusirq, m->mpc_destapic, m->mpc_destapiclint); + /* + * Well it seems all SMP boards in existence + * use ExtINT/LVT1 == LINT0 and + * NMI/LVT2 == LINT1 - the following check + * will show us if this assumptions is false. + * Until then we do not have to add baggage. 
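+ *
+ * (Hence the two BUG() checks below: ExtINT is expected only on
+ * LINT0 and NMI only on LINT1; anything else trips the check.)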
+ */ + if ((m->mpc_irqtype == mp_ExtINT) && + (m->mpc_destapiclint != 0)) + BUG(); + if ((m->mpc_irqtype == mp_NMI) && + (m->mpc_destapiclint != 1)) + BUG(); +} + +/* + * Read/parse the MPC + */ + +static int __init smp_read_mpc(struct mp_config_table *mpc) +{ + char str[16]; + int count=sizeof(*mpc); + unsigned char *mpt=((unsigned char *)mpc)+count; + + if (memcmp(mpc->mpc_signature,MPC_SIGNATURE,4)) { + panic("SMP mptable: bad signature [%c%c%c%c]!\n", + mpc->mpc_signature[0], + mpc->mpc_signature[1], + mpc->mpc_signature[2], + mpc->mpc_signature[3]); + return 0; + } + if (mpf_checksum((unsigned char *)mpc,mpc->mpc_length)) { + panic("SMP mptable: checksum error!\n"); + return 0; + } + if (mpc->mpc_spec!=0x01 && mpc->mpc_spec!=0x04) { + printk(KERN_ERR "SMP mptable: bad table version (%d)!!\n", + mpc->mpc_spec); + return 0; + } + if (!mpc->mpc_lapic) { + printk(KERN_ERR "SMP mptable: null local APIC address!\n"); + return 0; + } + memcpy(str,mpc->mpc_oem,8); + str[8]=0; + printk("OEM ID: %s ",str); + + memcpy(str,mpc->mpc_productid,12); + str[12]=0; + printk("Product ID: %s ",str); + + printk("APIC at: 0x%lX\n",mpc->mpc_lapic); + + /* save the local APIC address, it might be non-default */ + mp_lapic_addr = mpc->mpc_lapic; + + /* + * Now process the configuration blocks. + */ + while (count < mpc->mpc_length) { + switch(*mpt) { + case MP_PROCESSOR: + { + struct mpc_config_processor *m= + (struct mpc_config_processor *)mpt; + MP_processor_info(m); + mpt += sizeof(*m); + count += sizeof(*m); + break; + } + case MP_BUS: + { + struct mpc_config_bus *m= + (struct mpc_config_bus *)mpt; + MP_bus_info(m); + mpt += sizeof(*m); + count += sizeof(*m); + break; + } + case MP_IOAPIC: + { + struct mpc_config_ioapic *m= + (struct mpc_config_ioapic *)mpt; + MP_ioapic_info(m); + mpt+=sizeof(*m); + count+=sizeof(*m); + break; + } + case MP_INTSRC: + { + struct mpc_config_intsrc *m= + (struct mpc_config_intsrc *)mpt; + + MP_intsrc_info(m); + mpt+=sizeof(*m); + count+=sizeof(*m); + break; + } + case MP_LINTSRC: + { + struct mpc_config_lintsrc *m= + (struct mpc_config_lintsrc *)mpt; + MP_lintsrc_info(m); + mpt+=sizeof(*m); + count+=sizeof(*m); + break; + } + } + } + if (!num_processors) + printk(KERN_ERR "SMP mptable: no processors registered!\n"); + return num_processors; +} + +static void __init construct_default_ioirq_mptable(int mpc_default_type) +{ + struct mpc_config_intsrc intsrc; + int i; + + intsrc.mpc_type = MP_INTSRC; + intsrc.mpc_irqflag = 0; /* conforming */ + intsrc.mpc_srcbus = 0; + intsrc.mpc_dstapic = mp_ioapics[0].mpc_apicid; + + intsrc.mpc_irqtype = mp_INT; + for (i = 0; i < 16; i++) { + switch (mpc_default_type) { + case 2: + if (i == 0 || i == 13) + continue; /* IRQ0 & IRQ13 not connected */ + /* fall through */ + default: + if (i == 2) + continue; /* IRQ2 is never connected */ + } + + intsrc.mpc_srcbusirq = i; + intsrc.mpc_dstirq = i ? i : 2; /* IRQ0 to INTIN2 */ + MP_intsrc_info(&intsrc); + } + + intsrc.mpc_irqtype = mp_ExtINT; + intsrc.mpc_srcbusirq = 0; + intsrc.mpc_dstirq = 0; /* 8259A to INTIN0 */ + MP_intsrc_info(&intsrc); +} + +static inline void __init construct_default_ISA_mptable(int mpc_default_type) +{ + struct mpc_config_processor processor; + struct mpc_config_bus bus; + struct mpc_config_ioapic ioapic; + struct mpc_config_lintsrc lintsrc; + int linttypes[2] = { mp_ExtINT, mp_NMI }; + int i; + + /* + * local APIC has default address + */ + mp_lapic_addr = APIC_DEFAULT_PHYS_BASE; + + /* + * 2 CPUs, numbered 0 & 1. 
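+ *
+ * (Every MPS default configuration describes exactly two
+ * processors; the loop below registers apicid 0 and apicid 1,
+ * both with feature bits copied from the boot cpu.)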
+ */ + processor.mpc_type = MP_PROCESSOR; + /* Either an integrated APIC or a discrete 82489DX. */ + processor.mpc_apicver = mpc_default_type > 4 ? 0x10 : 0x01; + processor.mpc_cpuflag = CPU_ENABLED; + processor.mpc_cpufeature = (boot_cpu_data.x86 << 8) | + (boot_cpu_data.x86_model << 4) | + boot_cpu_data.x86_mask; + processor.mpc_featureflag = boot_cpu_data.x86_capability[0]; + processor.mpc_reserved[0] = 0; + processor.mpc_reserved[1] = 0; + for (i = 0; i < 2; i++) { + processor.mpc_apicid = i; + MP_processor_info(&processor); + } + + bus.mpc_type = MP_BUS; + bus.mpc_busid = 0; + switch (mpc_default_type) { + default: + printk("???\nUnknown standard configuration %d\n", + mpc_default_type); + /* fall through */ + case 1: + case 5: + memcpy(bus.mpc_bustype, "ISA ", 6); + break; + case 2: + case 6: + case 3: + memcpy(bus.mpc_bustype, "EISA ", 6); + break; + case 4: + case 7: + memcpy(bus.mpc_bustype, "MCA ", 6); + } + MP_bus_info(&bus); + if (mpc_default_type > 4) { + bus.mpc_busid = 1; + memcpy(bus.mpc_bustype, "PCI ", 6); + MP_bus_info(&bus); + } + + ioapic.mpc_type = MP_IOAPIC; + ioapic.mpc_apicid = 2; + ioapic.mpc_apicver = mpc_default_type > 4 ? 0x10 : 0x01; + ioapic.mpc_flags = MPC_APIC_USABLE; + ioapic.mpc_apicaddr = 0xFEC00000; + MP_ioapic_info(&ioapic); + + /* + * We set up most of the low 16 IO-APIC pins according to MPS rules. + */ + construct_default_ioirq_mptable(mpc_default_type); + + lintsrc.mpc_type = MP_LINTSRC; + lintsrc.mpc_irqflag = 0; /* conforming */ + lintsrc.mpc_srcbusid = 0; + lintsrc.mpc_srcbusirq = 0; + lintsrc.mpc_destapic = MP_APIC_ALL; + for (i = 0; i < 2; i++) { + lintsrc.mpc_irqtype = linttypes[i]; + lintsrc.mpc_destapiclint = i; + MP_lintsrc_info(&lintsrc); + } +} + +static struct intel_mp_floating *mpf_found; + +/* + * Scan the memory blocks for an SMP configuration block. + */ +void __init get_smp_config (void) +{ + struct intel_mp_floating *mpf = mpf_found; + printk("Intel MultiProcessor Specification v1.%d\n", mpf->mpf_specification); + if (mpf->mpf_feature2 & (1<<7)) { + printk(" IMCR and PIC compatibility mode.\n"); + pic_mode = 1; + } else { + printk(" Virtual Wire compatibility mode.\n"); + pic_mode = 0; + } + + /* + * Now see if we need to read further. + */ + if (mpf->mpf_feature1 != 0) { + + printk("Default MP configuration #%d\n", mpf->mpf_feature1); + construct_default_ISA_mptable(mpf->mpf_feature1); + + } else if (mpf->mpf_physptr) { + + /* + * Read the physical hardware table. Anything here will + * override the defaults. + */ + if (!smp_read_mpc((void *)mpf->mpf_physptr)) { + smp_found_config = 0; + printk(KERN_ERR "BIOS bug, MP table errors detected!...\n"); + printk(KERN_ERR "... disabling SMP support. (tell your hw vendor)\n"); + return; + } + /* + * If there are no explicit MP IRQ entries, then we are + * broken. We set up most of the low 16 IO-APIC pins to + * ISA defaults and hope it will work. + */ + if (!mp_irq_entries) { + struct mpc_config_bus bus; + + printk("BIOS bug, no explicit IRQ entries, using default mptable. (tell your hw vendor)\n"); + + bus.mpc_type = MP_BUS; + bus.mpc_busid = 0; + memcpy(bus.mpc_bustype, "ISA ", 6); + MP_bus_info(&bus); + + construct_default_ioirq_mptable(0); + } + + } else + BUG(); + + printk("Processors: %d\n", num_processors); + /* + * Only use the first configuration found. 
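+ *
+ * (smp_scan_config() below returns as soon as it finds one valid
+ * floating pointer structure, so mpf_found is never overwritten
+ * by a second table.)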
+ */ +} + +static int __init smp_scan_config (unsigned long base, unsigned long length) +{ + unsigned long *bp = phys_to_virt(base); + struct intel_mp_floating *mpf; + + Dprintk("Scan SMP from %p for %ld bytes.\n", bp,length); + if (sizeof(*mpf) != 16) + printk("Error: MPF size\n"); + + while (length > 0) { + mpf = (struct intel_mp_floating *)bp; + if ((*bp == SMP_MAGIC_IDENT) && + (mpf->mpf_length == 1) && + !mpf_checksum((unsigned char *)bp, 16) && + ((mpf->mpf_specification == 1) + || (mpf->mpf_specification == 4)) ) { + + smp_found_config = 1; + printk("found SMP MP-table at %08lx\n", + virt_to_phys(mpf)); + reserve_bootmem(virt_to_phys(mpf), PAGE_SIZE); + if (mpf->mpf_physptr) + reserve_bootmem(mpf->mpf_physptr, PAGE_SIZE); + mpf_found = mpf; + return 1; + } + bp += 4; + length -= 16; + } + return 0; +} + +void __init find_intel_smp (void) +{ + unsigned int address; + + /* + * FIXME: Linux assumes you have 640K of base ram.. + * this continues the error... + * + * 1) Scan the bottom 1K for a signature + * 2) Scan the top 1K of base RAM + * 3) Scan the 64K of bios + */ + if (smp_scan_config(0x0,0x400) || + smp_scan_config(639*0x400,0x400) || + smp_scan_config(0xF0000,0x10000)) + return; + /* + * If it is an SMP machine we should know now, unless the + * configuration is in an EISA/MCA bus machine with an + * extended bios data area. + * + * there is a real-mode segmented pointer pointing to the + * 4K EBDA area at 0x40E, calculate and scan it here. + * + * NOTE! There are Linux loaders that will corrupt the EBDA + * area, and as such this kind of SMP config may be less + * trustworthy, simply because the SMP table may have been + * stomped on during early boot. These loaders are buggy and + * should be fixed. + */ + + address = *(unsigned short *)phys_to_virt(0x40E); + address <<= 4; + smp_scan_config(address, 0x1000); + if (smp_found_config) + printk(KERN_WARNING "WARNING: MP table in the EBDA can be UNSAFE, contact linux-smp@vger.kernel.org if you experience SMP problems!\n"); +} + +#else + +/* + * The Visual Workstation is Intel MP compliant in the hardware + * sense, but it doesnt have a BIOS(-configuration table). + * No problem for Linux. + */ +void __init find_visws_smp(void) +{ + smp_found_config = 1; + + phys_cpu_present_map |= 2; /* or in id 1 */ + apic_version[1] |= 0x10; /* integrated APIC */ + apic_version[0] |= 0x10; + + mp_lapic_addr = APIC_DEFAULT_PHYS_BASE; +} + +#endif + +/* + * - Intel MP Configuration Table + * - or SGI Visual Workstation configuration + */ +void __init find_smp_config (void) +{ +#ifdef CONFIG_X86_IO_APIC + find_intel_smp(); +#endif +#ifdef CONFIG_VISWS + find_visws_smp(); +#endif +} + diff -urpN linux-2.4.9-linus/arch/i386/kernel/mtrr.c linux-2.4.9-larpage/arch/i386/kernel/mtrr.c --- linux-2.4.9-linus/arch/i386/kernel/mtrr.c 2001-05-24 15:14:08.000000000 -0700 +++ linux-2.4.9-larpage/arch/i386/kernel/mtrr.c 2002-11-20 02:02:18.000000000 -0800 @@ -536,13 +536,13 @@ static void intel_get_mtrr (unsigned int rdmsr(MTRRphysBase_MSR(reg), base_lo, base_hi); /* Work out the shifted address mask. */ - mask_lo = size_or_mask | mask_hi << (32 - PAGE_SHIFT) - | mask_lo >> PAGE_SHIFT; + mask_lo = size_or_mask | mask_hi << (32 - MMUPAGE_SHIFT) + | mask_lo >> MMUPAGE_SHIFT; /* This works correctly if size is a power of two, i.e. a contiguous range. 
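   (With the larpage patch, base and size are kept in units of
   MMUPAGE_SIZE, the 4K hardware mmu page, rather than the larger
   PAGE_SIZE, so -mask_lo below yields the size in mmu pages.)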
*/ *size = -mask_lo; - *base = base_hi << (32 - PAGE_SHIFT) | base_lo >> PAGE_SHIFT; + *base = base_hi << (32 - MMUPAGE_SHIFT) | base_lo >> MMUPAGE_SHIFT; *type = base_lo & 0xff; } /* End Function intel_get_mtrr */ @@ -568,7 +568,7 @@ static void cyrix_get_arr (unsigned int /* Enable interrupts if it was enabled previously */ __restore_flags (flags); shift = ((unsigned char *) base)[1] & 0x0f; - *base >>= PAGE_SHIFT; + *base >>= MMUPAGE_SHIFT; /* Power of two, at least 4K on ARR0-ARR6, 256K on ARR7 * Note: shift==0xf means 4G, this is unsupported. @@ -611,7 +611,7 @@ static void amd_get_mtrr (unsigned int r /* Upper dword is region 1, lower is region 0 */ if (reg == 1) low = high; /* The base masks off on the right alignment */ - *base = (low & 0xFFFE0000) >> PAGE_SHIFT; + *base = (low & 0xFFFE0000) >> MMUPAGE_SHIFT; *type = 0; if (low & 1) *type = MTRR_TYPE_UNCACHABLE; if (low & 2) *type = MTRR_TYPE_WRCOMB; @@ -636,7 +636,7 @@ static void amd_get_mtrr (unsigned int r * *128K ... */ low = (~low) & 0x1FFFC; - *size = (low + 4) << (15 - PAGE_SHIFT); + *size = (low + 4) << (15 - MMUPAGE_SHIFT); return; } /* End Function amd_get_mtrr */ @@ -662,8 +662,8 @@ void mtrr_centaur_report_mcr(int mcr, u3 static void centaur_get_mcr (unsigned int reg, unsigned long *base, unsigned long *size, mtrr_type *type) { - *base = centaur_mcr[reg].high >> PAGE_SHIFT; - *size = -(centaur_mcr[reg].low & 0xfffff000) >> PAGE_SHIFT; + *base = centaur_mcr[reg].high >> MMUPAGE_SHIFT; + *size = -(centaur_mcr[reg].low & 0xfffff000) >> MMUPAGE_SHIFT; *type = MTRR_TYPE_WRCOMB; /* If it is there, it is write-combining */ if(centaur_mcr_type==1 && ((centaur_mcr[reg].low&31)&2)) *type = MTRR_TYPE_UNCACHABLE; @@ -700,10 +700,10 @@ static void intel_set_mtrr_up (unsigned } else { - wrmsr (MTRRphysBase_MSR (reg), base << PAGE_SHIFT | type, - (base & size_and_mask) >> (32 - PAGE_SHIFT)); - wrmsr (MTRRphysMask_MSR (reg), -size << PAGE_SHIFT | 0x800, - (-size & size_and_mask) >> (32 - PAGE_SHIFT)); + wrmsr (MTRRphysBase_MSR (reg), base << MMUPAGE_SHIFT | type, + (base & size_and_mask) >> (32 - MMUPAGE_SHIFT)); + wrmsr (MTRRphysMask_MSR (reg), -size << MMUPAGE_SHIFT | 0x800, + (-size & size_and_mask) >> (32 - MMUPAGE_SHIFT)); } if (do_safe) set_mtrr_done (&ctxt); } /* End Function intel_set_mtrr_up */ @@ -744,7 +744,7 @@ static void cyrix_set_arr_up (unsigned i } if (do_safe) set_mtrr_prepare (&ctxt); - base <<= PAGE_SHIFT; + base <<= MMUPAGE_SHIFT; setCx86(arr, ((unsigned char *) &base)[3]); setCx86(arr+1, ((unsigned char *) &base)[2]); setCx86(arr+2, (((unsigned char *) &base)[1]) | arr_size); @@ -785,8 +785,8 @@ static void amd_set_mtrr_up (unsigned in desired 111 1111 1111 1100 mask But ~(x - 1) == ~x + 1 == -x. Two's complement rocks! */ - regs[reg] = (-size>>(15-PAGE_SHIFT) & 0x0001FFFC) - | (base<>(15-MMUPAGE_SHIFT) & 0x0001FFFC) + | (base< MTRR_TYPE_WRCOMB || size < (1 << (17-PAGE_SHIFT)) || + if ( type > MTRR_TYPE_WRCOMB || size < (1 << (17-MMUPAGE_SHIFT)) || (size & ~(size-1))-size || ( base & (size-1) ) ) return -EINVAL; break; @@ -1267,7 +1267,7 @@ int mtrr_add_page(unsigned long base, un boot_cpu_data.x86_model == 1 && boot_cpu_data.x86_mask <= 7 ) { - if ( base & ((1 << (22-PAGE_SHIFT))-1) ) + if ( base & ((1 << (22-MMUPAGE_SHIFT))-1) ) { printk (KERN_WARNING "mtrr: base(0x%lx000) is not 4 MiB aligned\n", base); return -EINVAL; @@ -1426,13 +1426,13 @@ int mtrr_add(unsigned long base, unsigne the error code. 
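   (Callers pass byte-granular base and size; with the larpage patch
   these are validated and shifted in MMUPAGE units, preserving the
   documented 4 kiB granularity even when PAGE_SIZE is larger.)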
*/ - if ( (base & (PAGE_SIZE - 1)) || (size & (PAGE_SIZE - 1)) ) + if ( (base & (MMUPAGE_SIZE - 1)) || (size & (MMUPAGE_SIZE - 1)) ) { printk ("mtrr: size and base must be multiples of 4 kiB\n"); printk ("mtrr: size: 0x%lx base: 0x%lx\n", size, base); return -EINVAL; } - return mtrr_add_page(base >> PAGE_SHIFT, size >> PAGE_SHIFT, type, increment); + return mtrr_add_page(base >> MMUPAGE_SHIFT, size >> MMUPAGE_SHIFT, type, increment); } /* End Function mtrr_add */ /** @@ -1547,13 +1547,13 @@ int mtrr_del (int reg, unsigned long bas the error code. */ { - if ( (base & (PAGE_SIZE - 1)) || (size & (PAGE_SIZE - 1)) ) + if ( (base & (MMUPAGE_SIZE - 1)) || (size & (MMUPAGE_SIZE - 1)) ) { printk ("mtrr: size and base must be multiples of 4 kiB\n"); printk ("mtrr: size: 0x%lx base: 0x%lx\n", size, base); return -EINVAL; } - return mtrr_del_page(reg, base >> PAGE_SHIFT, size >> PAGE_SHIFT); + return mtrr_del_page(reg, base >> MMUPAGE_SHIFT, size >> MMUPAGE_SHIFT); } #ifdef USERSPACE_INTERFACE @@ -1576,14 +1576,14 @@ static int mtrr_file_add (unsigned long file->private_data = fcount; } if (!page) { - if ( (base & (PAGE_SIZE - 1)) || (size & (PAGE_SIZE - 1)) ) + if ( (base & (MMUPAGE_SIZE - 1)) || (size & (MMUPAGE_SIZE - 1)) ) { printk ("mtrr: size and base must be multiples of 4 kiB\n"); printk ("mtrr: size: 0x%lx base: 0x%lx\n", size, base); return -EINVAL; } - base >>= PAGE_SHIFT; - size >>= PAGE_SHIFT; + base >>= MMUPAGE_SHIFT; + size >>= MMUPAGE_SHIFT; } reg = mtrr_add_page (base, size, type, 1); if (reg >= 0) ++fcount[reg]; @@ -1597,14 +1597,14 @@ static int mtrr_file_del (unsigned long unsigned int *fcount = file->private_data; if (!page) { - if ( (base & (PAGE_SIZE - 1)) || (size & (PAGE_SIZE - 1)) ) + if ( (base & (MMUPAGE_SIZE - 1)) || (size & (MMUPAGE_SIZE - 1)) ) { printk ("mtrr: size and base must be multiples of 4 kiB\n"); printk ("mtrr: size: 0x%lx base: 0x%lx\n", size, base); return -EINVAL; } - base >>= PAGE_SHIFT; - size >>= PAGE_SHIFT; + base >>= MMUPAGE_SHIFT; + size >>= MMUPAGE_SHIFT; } reg = mtrr_del_page (-1, base, size); if (reg < 0) return reg; @@ -1682,8 +1682,8 @@ static ssize_t mtrr_write (struct file * for (i = 0; i < MTRR_NUM_TYPES; ++i) { if ( strcmp (ptr, mtrr_strings[i]) ) continue; - base >>= PAGE_SHIFT; - size >>= PAGE_SHIFT; + base >>= MMUPAGE_SHIFT; + size >>= MMUPAGE_SHIFT; err = mtrr_add_page ((unsigned long)base, (unsigned long)size, i, 1); if (err < 0) return err; return len; @@ -1742,8 +1742,8 @@ static int mtrr_ioctl (struct inode *ino if (gentry.base + gentry.size > 0x100000 || gentry.size == 0x100000) gentry.base = gentry.size = gentry.type = 0; else { - gentry.base <<= PAGE_SHIFT; - gentry.size <<= PAGE_SHIFT; + gentry.base <<= MMUPAGE_SHIFT; + gentry.size <<= MMUPAGE_SHIFT; gentry.type = type; } @@ -1846,21 +1846,21 @@ static void compute_ascii (void) if (size == 0) usage_table[i] = 0; else { - if (size < (0x100000 >> PAGE_SHIFT)) + if (size < (0x100000 >> MMUPAGE_SHIFT)) { /* less than 1MB */ factor = 'K'; - size <<= PAGE_SHIFT - 10; + size <<= MMUPAGE_SHIFT - 10; } else { factor = 'M'; - size >>= 20 - PAGE_SHIFT; + size >>= 20 - MMUPAGE_SHIFT; } sprintf (ascii_buffer + ascii_buf_bytes, "reg%02i: base=0x%05lx000 (%4liMB), size=%4li%cB: %s, count=%d\n", - i, base, base >> (20 - PAGE_SHIFT), size, factor, + i, base, base >> (20 - MMUPAGE_SHIFT), size, factor, attrib_to_str (type), usage_table[i]); ascii_buf_bytes += strlen (ascii_buffer + ascii_buf_bytes); } @@ -2118,7 +2118,7 @@ static int __init mtrr_setup(void) if (boot_cpu_data.x86 == 7 && 
(cpuid_eax(0x80000000) >= 0x80000008)) { u32 phys_addr; phys_addr = cpuid_eax(0x80000008) & 0xff ; - size_or_mask = ~((1 << (phys_addr - PAGE_SHIFT)) - 1); + size_or_mask = ~((1 << (phys_addr - MMUPAGE_SHIFT)) - 1); size_and_mask = ~size_or_mask & 0xfff00000; break; } diff -urpN linux-2.4.9-linus/arch/i386/kernel/pci-i386.c linux-2.4.9-larpage/arch/i386/kernel/pci-i386.c --- linux-2.4.9-linus/arch/i386/kernel/pci-i386.c 2001-05-19 18:07:04.000000000 -0700 +++ linux-2.4.9-larpage/arch/i386/kernel/pci-i386.c 2002-11-20 02:02:18.000000000 -0800 @@ -375,7 +375,7 @@ int pci_mmap_page_range(struct pci_dev * /* Write-combine setting is ignored, it is changed via the mtrr * interfaces on this platform. */ - if (remap_page_range(vma->vm_start, vma->vm_pgoff << PAGE_SHIFT, + if (remap_page_range(vma->vm_start, vma->vm_pgoff << MMUPAGE_SHIFT, vma->vm_end - vma->vm_start, vma->vm_page_prot)) return -EAGAIN; diff -urpN linux-2.4.9-linus/arch/i386/kernel/process.c linux-2.4.9-larpage/arch/i386/kernel/process.c --- linux-2.4.9-linus/arch/i386/kernel/process.c 2001-07-25 18:19:11.000000000 -0700 +++ linux-2.4.9-larpage/arch/i386/kernel/process.c 2002-11-20 02:02:18.000000000 -0800 @@ -562,16 +562,16 @@ void dump_thread(struct pt_regs * regs, /* changed the size calculations - should hopefully work better. lbt */ dump->magic = CMAGIC; dump->start_code = 0; - dump->start_stack = regs->esp & ~(PAGE_SIZE - 1); - dump->u_tsize = ((unsigned long) current->mm->end_code) >> PAGE_SHIFT; - dump->u_dsize = ((unsigned long) (current->mm->brk + (PAGE_SIZE-1))) >> PAGE_SHIFT; + dump->start_stack = regs->esp & ~(MMUPAGE_SIZE - 1); + dump->u_tsize = ((unsigned long) current->mm->end_code) >> MMUPAGE_SHIFT; + dump->u_dsize = ((unsigned long) (current->mm->brk + (MMUPAGE_SIZE-1))) >> MMUPAGE_SHIFT; dump->u_dsize -= dump->u_tsize; dump->u_ssize = 0; for (i = 0; i < 8; i++) dump->u_debugreg[i] = current->thread.debugreg[i]; if (dump->start_stack < TASK_SIZE) - dump->u_ssize = ((unsigned long) (TASK_SIZE - dump->start_stack)) >> PAGE_SHIFT; + dump->u_ssize = ((unsigned long) (TASK_SIZE - dump->start_stack)) >> MMUPAGE_SHIFT; dump->regs.ebx = regs->ebx; dump->regs.ecx = regs->ecx; diff -urpN linux-2.4.9-linus/arch/i386/kernel/process.c.orig linux-2.4.9-larpage/arch/i386/kernel/process.c.orig --- linux-2.4.9-linus/arch/i386/kernel/process.c.orig 1969-12-31 16:00:00.000000000 -0800 +++ linux-2.4.9-larpage/arch/i386/kernel/process.c.orig 2001-07-25 18:19:11.000000000 -0700 @@ -0,0 +1,775 @@ +/* + * linux/arch/i386/kernel/process.c + * + * Copyright (C) 1995 Linus Torvalds + * + * Pentium III FXSR, SSE support + * Gareth Hughes , May 2000 + */ + +/* + * This file handles the architecture-dependent parts of process handling.. + */ + +#define __KERNEL_SYSCALLS__ +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#ifdef CONFIG_MATH_EMULATION +#include +#endif + +#include + +asmlinkage void ret_from_fork(void) __asm__("ret_from_fork"); + +int hlt_counter; + +/* + * Powermanagement idle function, if any.. + */ +void (*pm_idle)(void); + +/* + * Power off function, if any + */ +void (*pm_power_off)(void); + +void disable_hlt(void) +{ + hlt_counter++; +} + +void enable_hlt(void) +{ + hlt_counter--; +} + +/* + * We use this if we don't have any better + * idle routine.. 
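+ *
+ * (Halt the cpu until the next interrupt, but only when hlt is
+ * known to work and has not been vetoed via disable_hlt().)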
+ */ +static void default_idle(void) +{ + if (current_cpu_data.hlt_works_ok && !hlt_counter) { + __cli(); + if (!current->need_resched) + safe_halt(); + else + __sti(); + } +} + +/* + * On SMP it's slightly faster (but much more power-consuming!) + * to poll the ->need_resched flag instead of waiting for the + * cross-CPU IPI to arrive. Use this option with caution. + */ +static void poll_idle (void) +{ + int oldval; + + __sti(); + + /* + * Deal with another CPU just having chosen a thread to + * run here: + */ + oldval = xchg(¤t->need_resched, -1); + + if (!oldval) + asm volatile( + "2:" + "cmpl $-1, %0;" + "rep; nop;" + "je 2b;" + : :"m" (current->need_resched)); +} + +/* + * The idle thread. There's no useful work to be + * done, so just try to conserve power and have a + * low exit latency (ie sit in a loop waiting for + * somebody to say that they'd like to reschedule) + */ +void cpu_idle (void) +{ + /* endless idle loop with no priority at all */ + init_idle(); + current->nice = 20; + current->counter = -100; + + while (1) { + void (*idle)(void) = pm_idle; + if (!idle) + idle = default_idle; + while (!current->need_resched) + idle(); + schedule(); + check_pgt_cache(); + } +} + +static int __init idle_setup (char *str) +{ + if (!strncmp(str, "poll", 4)) { + printk("using polling idle threads.\n"); + pm_idle = poll_idle; + } + + return 1; +} + +__setup("idle=", idle_setup); + +static long no_idt[2]; +static int reboot_mode; +int reboot_thru_bios; + +static int __init reboot_setup(char *str) +{ + while(1) { + switch (*str) { + case 'w': /* "warm" reboot (no memory testing etc) */ + reboot_mode = 0x1234; + break; + case 'c': /* "cold" reboot (with memory testing etc) */ + reboot_mode = 0x0; + break; + case 'b': /* "bios" reboot by jumping through the BIOS */ + reboot_thru_bios = 1; + break; + case 'h': /* "hard" reboot by toggling RESET and/or crashing the CPU */ + reboot_thru_bios = 0; + break; + } + if((str = strchr(str,',')) != NULL) + str++; + else + break; + } + return 1; +} + +__setup("reboot=", reboot_setup); + +/* The following code and data reboots the machine by switching to real + mode and jumping to the BIOS reset entry point, as if the CPU has + really been reset. The previous version asked the keyboard + controller to pulse the CPU reset line, which is more thorough, but + doesn't work with at least one type of 486 motherboard. It is easy + to stop this code working; hence the copious comments. */ + +static unsigned long long +real_mode_gdt_entries [3] = +{ + 0x0000000000000000ULL, /* Null descriptor */ + 0x00009a000000ffffULL, /* 16-bit real-mode 64k code at 0x00000000 */ + 0x000092000100ffffULL /* 16-bit real-mode 64k data at 0x00000100 */ +}; + +static struct +{ + unsigned short size __attribute__ ((packed)); + unsigned long long * base __attribute__ ((packed)); +} +real_mode_gdt = { sizeof (real_mode_gdt_entries) - 1, real_mode_gdt_entries }, +real_mode_idt = { 0x3ff, 0 }; + +/* This is 16-bit protected mode code to disable paging and the cache, + switch to real mode and jump to the BIOS reset code. + + The instruction that switches to real mode by writing to CR0 must be + followed immediately by a far jump instruction, which set CS to a + valid value for real mode, and flushes the prefetch queue to avoid + running instructions that have already been decoded in protected + mode. + + Clears all the flags except ET, especially PG (paging), PE + (protected-mode enable) and TS (task switch for coprocessor state + save). Flushes the TLB after paging has been disabled. 
Sets CD and + NW, to disable the cache on a 486, and invalidates the cache. This + is more like the state of a 486 after reset. I don't know if + something else should be done for other chips. + + More could be done here to set up the registers as if a CPU reset had + occurred; hopefully real BIOSs don't assume much. */ + +static unsigned char real_mode_switch [] = +{ + 0x66, 0x0f, 0x20, 0xc0, /* movl %cr0,%eax */ + 0x66, 0x83, 0xe0, 0x11, /* andl $0x00000011,%eax */ + 0x66, 0x0d, 0x00, 0x00, 0x00, 0x60, /* orl $0x60000000,%eax */ + 0x66, 0x0f, 0x22, 0xc0, /* movl %eax,%cr0 */ + 0x66, 0x0f, 0x22, 0xd8, /* movl %eax,%cr3 */ + 0x66, 0x0f, 0x20, 0xc3, /* movl %cr0,%ebx */ + 0x66, 0x81, 0xe3, 0x00, 0x00, 0x00, 0x60, /* andl $0x60000000,%ebx */ + 0x74, 0x02, /* jz f */ + 0x0f, 0x08, /* invd */ + 0x24, 0x10, /* f: andb $0x10,al */ + 0x66, 0x0f, 0x22, 0xc0 /* movl %eax,%cr0 */ +}; +static unsigned char jump_to_bios [] = +{ + 0xea, 0x00, 0x00, 0xff, 0xff /* ljmp $0xffff,$0x0000 */ +}; + +static inline void kb_wait(void) +{ + int i; + + for (i=0; i<0x10000; i++) + if ((inb_p(0x64) & 0x02) == 0) + break; +} + +/* + * Switch to real mode and then execute the code + * specified by the code and length parameters. + * We assume that length will aways be less that 100! + */ +void machine_real_restart(unsigned char *code, int length) +{ + unsigned long flags; + + cli(); + + /* Write zero to CMOS register number 0x0f, which the BIOS POST + routine will recognize as telling it to do a proper reboot. (Well + that's what this book in front of me says -- it may only apply to + the Phoenix BIOS though, it's not clear). At the same time, + disable NMIs by setting the top bit in the CMOS address register, + as we're about to do peculiar things to the CPU. I'm not sure if + `outb_p' is needed instead of just `outb'. Use it to be on the + safe side. (Yes, CMOS_WRITE does outb_p's. - Paul G.) + */ + + spin_lock_irqsave(&rtc_lock, flags); + CMOS_WRITE(0x00, 0x8f); + spin_unlock_irqrestore(&rtc_lock, flags); + + /* Remap the kernel at virtual address zero, as well as offset zero + from the kernel segment. This assumes the kernel segment starts at + virtual address PAGE_OFFSET. */ + + memcpy (swapper_pg_dir, swapper_pg_dir + USER_PGD_PTRS, + sizeof (swapper_pg_dir [0]) * KERNEL_PGD_PTRS); + + /* Make sure the first page is mapped to the start of physical memory. + It is normally not mapped, to trap kernel NULL pointer dereferences. */ + + pg0[0] = _PAGE_RW | _PAGE_PRESENT; + + /* + * Use `swapper_pg_dir' as our page directory. + */ + asm volatile("movl %0,%%cr3": :"r" (__pa(swapper_pg_dir))); + + /* Write 0x1234 to absolute memory location 0x472. The BIOS reads + this on booting to tell it to "Bypass memory test (also warm + boot)". This seems like a fairly standard thing that gets set by + REBOOT.COM programs, and the previous reset routine did this + too. */ + + *((unsigned short *)0x472) = reboot_mode; + + /* For the switch to real mode, copy some code to low memory. It has + to be in the first 64k because it is running in 16-bit mode, and it + has to have the same physical and virtual address, because it turns + off paging. Copy it near the end of the first page, out of the way + of BIOS variables. */ + + memcpy ((void *) (0x1000 - sizeof (real_mode_switch) - 100), + real_mode_switch, sizeof (real_mode_switch)); + memcpy ((void *) (0x1000 - 100), code, length); + + /* Set up the IDT for real mode. 
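+   (real_mode_idt above is limit 0x3ff, base 0: that is, the
+   real-mode interrupt vector table in the first kilobyte of
+   physical memory.)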
+	/* Set up the IDT for real mode. */
+
+	__asm__ __volatile__ ("lidt %0" : : "m" (real_mode_idt));
+
+	/* Set up a GDT from which we can load segment descriptors for real
+	   mode.  The GDT is not used in real mode; it is just needed here to
+	   prepare the descriptors. */
+
+	__asm__ __volatile__ ("lgdt %0" : : "m" (real_mode_gdt));
+
+	/* Load the data segment registers, and thus the descriptors ready for
+	   real mode.  The base address of each segment is 0x100, 16 times the
+	   selector value being loaded here.  This is so that the segment
+	   registers don't have to be reloaded after switching to real mode:
+	   the values are consistent for real mode operation already. */
+
+	__asm__ __volatile__ ("movl $0x0010,%%eax\n"
+				"\tmovl %%eax,%%ds\n"
+				"\tmovl %%eax,%%es\n"
+				"\tmovl %%eax,%%fs\n"
+				"\tmovl %%eax,%%gs\n"
+				"\tmovl %%eax,%%ss" : : : "eax");
+
+	/* Jump to the 16-bit code that we copied earlier.  It disables paging
+	   and the cache, switches to real mode, and jumps to the BIOS reset
+	   entry point. */
+
+	__asm__ __volatile__ ("ljmp $0x0008,%0"
+				:
+				: "i" ((void *) (0x1000 - sizeof (real_mode_switch) - 100)));
+}
+
+void machine_restart(char * __unused)
+{
+#if CONFIG_SMP
+	/*
+	 * Stop all CPUs and turn off local APICs and the IO-APIC, so
+	 * other OSs see a clean IRQ state.
+	 */
+	smp_send_stop();
+	disable_IO_APIC();
+#endif
+
+	if (!reboot_thru_bios) {
+		/* rebooting needs to touch the page at absolute addr 0 */
+		*((unsigned short *)__va(0x472)) = reboot_mode;
+		for (;;) {
+			int i;
+			for (i=0; i<100; i++) {
+				kb_wait();
+				udelay(50);
+				outb(0xfe,0x64);         /* pulse reset low */
+				udelay(50);
+			}
+			/* That didn't work - force a triple fault.. */
+			__asm__ __volatile__("lidt %0": :"m" (no_idt));
+			__asm__ __volatile__("int3");
+		}
+	}
+
+	machine_real_restart(jump_to_bios, sizeof(jump_to_bios));
+}
+
+void machine_halt(void)
+{
+}
+
+void machine_power_off(void)
+{
+	if (pm_power_off)
+		pm_power_off();
+}
+
+extern void show_trace(unsigned long* esp);
+
+void show_regs(struct pt_regs * regs)
+{
+	unsigned long cr0 = 0L, cr2 = 0L, cr3 = 0L, cr4 = 0L;
+
+	printk("\n");
+	printk("EIP: %04x:[<%08lx>] CPU: %d",0xffff & regs->xcs,regs->eip, smp_processor_id());
+	if (regs->xcs & 3)
+		printk(" ESP: %04x:%08lx",0xffff & regs->xss,regs->esp);
+	printk(" EFLAGS: %08lx\n",regs->eflags);
+	printk("EAX: %08lx EBX: %08lx ECX: %08lx EDX: %08lx\n",
+		regs->eax,regs->ebx,regs->ecx,regs->edx);
+	printk("ESI: %08lx EDI: %08lx EBP: %08lx",
+		regs->esi, regs->edi, regs->ebp);
+	printk(" DS: %04x ES: %04x\n",
+		0xffff & regs->xds,0xffff & regs->xes);
+
+	__asm__("movl %%cr0, %0": "=r" (cr0));
+	__asm__("movl %%cr2, %0": "=r" (cr2));
+	__asm__("movl %%cr3, %0": "=r" (cr3));
+	/* This could fault if %cr4 does not exist */
+	__asm__("1: movl %%cr4, %0		\n"
+		"2:				\n"
+		".section __ex_table,\"a\"	\n"
+		".long 1b,2b			\n"
+		".previous			\n"
+		: "=r" (cr4): "0" (0));
+	printk("CR0: %08lx CR2: %08lx CR3: %08lx CR4: %08lx\n", cr0, cr2, cr3, cr4);
+	show_trace(&regs->esp);
+}
+
+/*
+ * No need to lock the MM as we are the last user
+ */
+void release_segments(struct mm_struct *mm)
+{
+	void * ldt = mm->context.segments;
+
+	/*
+	 * free the LDT
+	 */
+	if (ldt) {
+		mm->context.segments = NULL;
+		clear_LDT();
+		vfree(ldt);
+	}
+}
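kernel_thread() below issues the clone system call by hand, via int $0x80,
so that the child starts in a chosen kernel function.  A rough userspace
analogue, illustrative only and using the glibc clone() wrapper instead of
the raw system call:

	#define _GNU_SOURCE
	#include <sched.h>
	#include <signal.h>
	#include <stdio.h>
	#include <sys/wait.h>

	static char stack[16384];

	static int thread_fn(void *arg)
	{
		printf("child running, arg=%s\n", (char *)arg);
		return 0;
	}

	int main(void)
	{
		/* the child stack grows down on i386: pass the top of the buffer */
		int pid = clone(thread_fn, stack + sizeof(stack),
				CLONE_VM | SIGCHLD, "hello");

		if (pid < 0) {
			perror("clone");
			return 1;
		}
		waitpid(pid, NULL, 0);
		return 0;
	}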
+/*
+ * Create a kernel thread
+ */
+int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags)
+{
+	long retval, d0;
+
+	__asm__ __volatile__(
+		"movl %%esp,%%esi\n\t"
+		"int $0x80\n\t"		/* Linux/i386 system call */
+		"cmpl %%esp,%%esi\n\t"	/* child or parent? */
+		"je 1f\n\t"		/* parent - jump */
+		/* Load the argument into eax, and push it.  That way, it does
+		 * not matter whether the called function is compiled with
+		 * -mregparm or not.  */
+		"movl %4,%%eax\n\t"
+		"pushl %%eax\n\t"
+		"call *%5\n\t"		/* call fn */
+		"movl %3,%0\n\t"	/* exit */
+		"int $0x80\n"
+		"1:\t"
+		:"=&a" (retval), "=&S" (d0)
+		:"0" (__NR_clone), "i" (__NR_exit),
+		 "r" (arg), "r" (fn),
+		 "b" (flags | CLONE_VM)
+		: "memory");
+	return retval;
+}
+
+/*
+ * Free current thread data structures etc..
+ */
+void exit_thread(void)
+{
+	/* nothing to do ... */
+}
+
+void flush_thread(void)
+{
+	struct task_struct *tsk = current;
+
+	memset(tsk->thread.debugreg, 0, sizeof(unsigned long)*8);
+	/*
+	 * Forget coprocessor state..
+	 */
+	clear_fpu(tsk);
+	tsk->used_math = 0;
+}
+
+void release_thread(struct task_struct *dead_task)
+{
+	if (dead_task->mm) {
+		void * ldt = dead_task->mm->context.segments;
+
+		// temporary debugging check
+		if (ldt) {
+			printk("WARNING: dead process %8s still has LDT? <%p>\n",
+					dead_task->comm, ldt);
+			BUG();
+		}
+	}
+}
+
+/*
+ * we do not have to muck with descriptors here, that is
+ * done in switch_mm() as needed.
+ */
+void copy_segments(struct task_struct *p, struct mm_struct *new_mm)
+{
+	struct mm_struct * old_mm;
+	void *old_ldt, *ldt;
+
+	ldt = NULL;
+	old_mm = current->mm;
+	if (old_mm && (old_ldt = old_mm->context.segments) != NULL) {
+		/*
+		 * Completely new LDT, we initialize it from the parent:
+		 */
+		ldt = vmalloc(LDT_ENTRIES*LDT_ENTRY_SIZE);
+		if (!ldt)
+			printk(KERN_WARNING "ldt allocation failed\n");
+		else
+			memcpy(ldt, old_ldt, LDT_ENTRIES*LDT_ENTRY_SIZE);
+	}
+	new_mm->context.segments = ldt;
+	new_mm->context.cpuvalid = ~0UL;	/* valid on all CPU's - they can't have stale data */
+}
+
+/*
+ * Save a segment.
+ */
+#define savesegment(seg,value) \
+	asm volatile("movl %%" #seg ",%0":"=m" (*(int *)&(value)))
+
+int copy_thread(int nr, unsigned long clone_flags, unsigned long esp,
+	unsigned long unused,
+	struct task_struct * p, struct pt_regs * regs)
+{
+	struct pt_regs * childregs;
+
+	childregs = ((struct pt_regs *) (THREAD_SIZE + (unsigned long) p)) - 1;
+	struct_cpy(childregs, regs);
+	childregs->eax = 0;
+	childregs->esp = esp;
+
+	p->thread.esp = (unsigned long) childregs;
+	p->thread.esp0 = (unsigned long) (childregs+1);
+
+	p->thread.eip = (unsigned long) ret_from_fork;
+
+	savesegment(fs,p->thread.fs);
+	savesegment(gs,p->thread.gs);
+
+	unlazy_fpu(current);
+	struct_cpy(&p->thread.i387, &current->thread.i387);
+
+	return 0;
+}
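The childregs->eax = 0 line in copy_thread() above is what makes fork()
return 0 in the child: the child resumes from the copied register frame,
and %eax holds the syscall return value.  Seen from userspace (illustrative
only):

	#include <stdio.h>
	#include <sys/types.h>
	#include <sys/wait.h>
	#include <unistd.h>

	int main(void)
	{
		pid_t pid = fork();

		if (pid < 0) {
			perror("fork");
			return 1;
		}
		if (pid == 0)		/* child: its %eax was patched to 0 */
			printf("in the child\n");
		else {			/* parent: %eax carries the child pid */
			printf("parent of %d\n", (int)pid);
			waitpid(pid, NULL, 0);
		}
		return 0;
	}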
+/*
+ * fill in the user structure for a core dump..
+ */
+void dump_thread(struct pt_regs * regs, struct user * dump)
+{
+	int i;
+
+/* changed the size calculations - should hopefully work better. lbt */
+	dump->magic = CMAGIC;
+	dump->start_code = 0;
+	dump->start_stack = regs->esp & ~(PAGE_SIZE - 1);
+	dump->u_tsize = ((unsigned long) current->mm->end_code) >> PAGE_SHIFT;
+	dump->u_dsize = ((unsigned long) (current->mm->brk + (PAGE_SIZE-1))) >> PAGE_SHIFT;
+	dump->u_dsize -= dump->u_tsize;
+	dump->u_ssize = 0;
+	for (i = 0; i < 8; i++)
+		dump->u_debugreg[i] = current->thread.debugreg[i];
+
+	if (dump->start_stack < TASK_SIZE)
+		dump->u_ssize = ((unsigned long) (TASK_SIZE - dump->start_stack)) >> PAGE_SHIFT;
+
+	dump->regs.ebx = regs->ebx;
+	dump->regs.ecx = regs->ecx;
+	dump->regs.edx = regs->edx;
+	dump->regs.esi = regs->esi;
+	dump->regs.edi = regs->edi;
+	dump->regs.ebp = regs->ebp;
+	dump->regs.eax = regs->eax;
+	dump->regs.ds = regs->xds;
+	dump->regs.es = regs->xes;
+	savesegment(fs,dump->regs.fs);
+	savesegment(gs,dump->regs.gs);
+	dump->regs.orig_eax = regs->orig_eax;
+	dump->regs.eip = regs->eip;
+	dump->regs.cs = regs->xcs;
+	dump->regs.eflags = regs->eflags;
+	dump->regs.esp = regs->esp;
+	dump->regs.ss = regs->xss;
+
+	dump->u_fpvalid = dump_fpu (regs, &dump->i387);
+}
+
+/*
+ * This special macro can be used to load a debugging register
+ */
+#define loaddebug(thread,register) \
+		__asm__("movl %0,%%db" #register \
+			: /* no output */ \
+			:"r" (thread->debugreg[register]))
+
+/*
+ * switch_to(x,y) should switch tasks from x to y.
+ *
+ * We fsave/fwait so that an exception goes off at the right time
+ * (as a call from the fsave or fwait in effect) rather than to
+ * the wrong process.  Lazy FP saving no longer makes any sense
+ * with modern CPUs, and this simplifies a lot of things (SMP
+ * and UP become the same).
+ *
+ * NOTE! We used to use the x86 hardware context switching.  The
+ * reason for not using it any more becomes apparent when you
+ * try to recover gracefully from saved state that is no longer
+ * valid (stale segment register values in particular).  With the
+ * hardware task-switch, there is no way to fix up bad state in
+ * a reasonable manner.
+ *
+ * The fact that Intel documents the hardware task-switching to
+ * be slow is a fairly red herring - this code is not noticeably
+ * faster.  However, there _is_ some room for improvement here,
+ * so the performance issues may eventually be a valid point.
+ * More important, however, is the fact that this allows us much
+ * more flexibility.
+ */
+void __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+{
+	struct thread_struct *prev = &prev_p->thread,
+				 *next = &next_p->thread;
+	struct tss_struct *tss = init_tss + smp_processor_id();
+
+	unlazy_fpu(prev_p);
+
+	/*
+	 * Reload esp0, LDT and the page table pointer:
+	 */
+	tss->esp0 = next->esp0;
+
+	/*
+	 * Save away %fs and %gs. No need to save %es and %ds, as
+	 * those are always kernel segments while inside the kernel.
+	 */
+	asm volatile("movl %%fs,%0":"=m" (*(int *)&prev->fs));
+	asm volatile("movl %%gs,%0":"=m" (*(int *)&prev->gs));
+
+	/*
+	 * Restore %fs and %gs.
+	 */
+	loadsegment(fs, next->fs);
+	loadsegment(gs, next->gs);
+
+	/*
+	 * Now maybe reload the debug registers
+	 */
+	if (next->debugreg[7]) {
+		loaddebug(next, 0);
+		loaddebug(next, 1);
+		loaddebug(next, 2);
+		loaddebug(next, 3);
+		/* no 4 and 5 */
+		loaddebug(next, 6);
+		loaddebug(next, 7);
+	}
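The io-bitmap block just below is driven by ioperm(): the first sys_ioperm()
call populates thread->io_bitmap, which __switch_to() then copies into the
TSS.  A minimal userspace trigger, illustrative only (needs root, i.e.
CAP_SYS_RAWIO):

	#include <stdio.h>
	#include <sys/io.h>

	int main(void)
	{
		/* ask for access to the keyboard controller status port */
		if (ioperm(0x64, 1, 1) < 0) {
			perror("ioperm");
			return 1;
		}
		printf("port 0x64 status: 0x%02x\n", inb(0x64));
		return 0;
	}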
+	if (prev->ioperm || next->ioperm) {
+		if (next->ioperm) {
+			/*
+			 * 4 cachelines copy ... not good, but not that
+			 * bad either. Anyone got something better?
+			 * This only affects processes which use ioperm().
+			 * [Putting the TSSs into 4k-tlb mapped regions
+			 * and playing VM tricks to switch the IO bitmap
+			 * is not really acceptable.]
+			 */
+			memcpy(tss->io_bitmap, next->io_bitmap,
+				IO_BITMAP_SIZE*sizeof(unsigned long));
+			tss->bitmap = IO_BITMAP_OFFSET;
+		} else
+			/*
+			 * a bitmap offset pointing outside of the TSS limit
+			 * causes a nicely controllable SIGSEGV if a process
+			 * tries to use a port IO instruction. The first
+			 * sys_ioperm() call sets up the bitmap properly.
+			 */
+			tss->bitmap = INVALID_IO_BITMAP_OFFSET;
+	}
+}
+
+asmlinkage int sys_fork(struct pt_regs regs)
+{
+	return do_fork(SIGCHLD, regs.esp, &regs, 0);
+}
+
+asmlinkage int sys_clone(struct pt_regs regs)
+{
+	unsigned long clone_flags;
+	unsigned long newsp;
+
+	clone_flags = regs.ebx;
+	newsp = regs.ecx;
+	if (!newsp)
+		newsp = regs.esp;
+	return do_fork(clone_flags, newsp, &regs, 0);
+}
+
+/*
+ * This is trivial, and on the face of it looks like it
+ * could equally well be done in user mode.
+ *
+ * Not so, for quite unobvious reasons - register pressure.
+ * In user mode vfork() cannot have a stack frame, and if
+ * done by calling the "clone()" system call directly, you
+ * do not have enough call-clobbered registers to hold all
+ * the information you need.
+ */
+asmlinkage int sys_vfork(struct pt_regs regs)
+{
+	return do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD, regs.esp, &regs, 0);
+}
+
+/*
+ * sys_execve() executes a new program.
+ */
+asmlinkage int sys_execve(struct pt_regs regs)
+{
+	int error;
+	char * filename;
+
+	filename = getname((char *) regs.ebx);
+	error = PTR_ERR(filename);
+	if (IS_ERR(filename))
+		goto out;
+	error = do_execve(filename, (char **) regs.ecx, (char **) regs.edx, &regs);
+	if (error == 0)
+		current->ptrace &= ~PT_DTRACE;
+	putname(filename);
+out:
+	return error;
+}
+
+/*
+ * These bracket the sleeping functions..
+ */
+extern void scheduling_functions_start_here(void);
+extern void scheduling_functions_end_here(void);
+#define first_sched	((unsigned long) scheduling_functions_start_here)
+#define last_sched	((unsigned long) scheduling_functions_end_here)
+
+unsigned long get_wchan(struct task_struct *p)
+{
+	unsigned long ebp, esp, eip;
+	unsigned long stack_page;
+	int count = 0;
+	if (!p || p == current || p->state == TASK_RUNNING)
+		return 0;
+	stack_page = (unsigned long)p;
+	esp = p->thread.esp;
+	if (!stack_page || esp < stack_page || esp > 8188+stack_page)
+		return 0;
+	/* include/asm-i386/system.h:switch_to() pushes ebp last.
*/ + ebp = *(unsigned long *) esp; + do { + if (ebp < stack_page || ebp > 8184+stack_page) + return 0; + eip = *(unsigned long *) (ebp+4); + if (eip < first_sched || eip >= last_sched) + return eip; + ebp = *(unsigned long *) ebp; + } while (count++ < 16); + return 0; +} +#undef last_sched +#undef first_sched diff -urpN linux-2.4.9-linus/arch/i386/kernel/setup.c linux-2.4.9-larpage/arch/i386/kernel/setup.c --- linux-2.4.9-linus/arch/i386/kernel/setup.c 2001-07-11 09:31:44.000000000 -0700 +++ linux-2.4.9-larpage/arch/i386/kernel/setup.c 2002-11-20 02:02:20.000000000 -0800 @@ -808,9 +808,9 @@ void __init setup_arch(char **cmdline_p) parse_mem_cmdline(cmdline_p); -#define PFN_UP(x) (((x) + PAGE_SIZE-1) >> PAGE_SHIFT) -#define PFN_DOWN(x) ((x) >> PAGE_SHIFT) -#define PFN_PHYS(x) ((x) << PAGE_SHIFT) +#define PFN_UP(x) (((x) + MMUPAGE_SIZE-1) >> MMUPAGE_SHIFT) +#define PFN_DOWN(x) ((x) >> MMUPAGE_SHIFT) +#define PFN_PHYS(x) ((x) << MMUPAGE_SHIFT) /* * 128MB for vmalloc and initrd @@ -818,7 +818,7 @@ void __init setup_arch(char **cmdline_p) #define VMALLOC_RESERVE (unsigned long)(128 << 20) #define MAXMEM (unsigned long)(-PAGE_OFFSET-VMALLOC_RESERVE) #define MAXMEM_PFN PFN_DOWN(MAXMEM) -#define MAX_NONPAE_PFN (1 << 20) +#define MAX_NONPAE_PFN (1 << (32 - MMUPAGE_SHIFT)) /* * partially used pages are not usable - thus @@ -842,6 +842,7 @@ void __init setup_arch(char **cmdline_p) if (end > max_pfn) max_pfn = end; } + max_pfn &= ~(PAGE_MMUCOUNT - 1); /* * Determine low and high memory ranges: @@ -869,11 +870,10 @@ void __init setup_arch(char **cmdline_p) } #ifdef CONFIG_HIGHMEM - highstart_pfn = highend_pfn = max_pfn; - if (max_pfn > MAXMEM_PFN) { - highstart_pfn = MAXMEM_PFN; + highend_pfn = max_pfn; + if (highend_pfn > MAXMEM_PFN) { printk(KERN_NOTICE "%ldMB HIGHMEM available.\n", - pages_to_mb(highend_pfn - highstart_pfn)); + (highend_pfn-MAXMEM_PFN)>>(20-MMUPAGE_SHIFT)); } #endif /* @@ -921,14 +921,14 @@ void __init setup_arch(char **cmdline_p) * the (very unlikely) case of us accidentally initializing the * bootmem allocator with an invalid RAM area. */ - reserve_bootmem(HIGH_MEMORY, (PFN_PHYS(start_pfn) + - bootmap_size + PAGE_SIZE-1) - (HIGH_MEMORY)); + reserve_bootmem(HIGH_MEMORY, + PFN_PHYS(start_pfn) + bootmap_size - HIGH_MEMORY); /* * reserve physical page 0 - it's a special BIOS page on many boxes, * enabling clean reboots, SMP operation, laptop functions. */ - reserve_bootmem(0, PAGE_SIZE); + reserve_bootmem(0, MMUPAGE_SIZE); #ifdef CONFIG_SMP /* @@ -936,7 +936,7 @@ void __init setup_arch(char **cmdline_p) * FIXME: Don't need the extra page at 4K, but need to fix * trampoline before removing it. (see the GDT stuff) */ - reserve_bootmem(PAGE_SIZE, PAGE_SIZE); + reserve_bootmem(MMUPAGE_SIZE, MMUPAGE_SIZE); smp_alloc_memory(); /* AP processor realmode stacks in low memory*/ #endif @@ -960,7 +960,7 @@ void __init setup_arch(char **cmdline_p) #ifdef CONFIG_BLK_DEV_INITRD if (LOADER_TYPE && INITRD_START) { - if (INITRD_START + INITRD_SIZE <= (max_low_pfn << PAGE_SHIFT)) { + if (INITRD_START + INITRD_SIZE <= PFN_PHYS(max_low_pfn)) { reserve_bootmem(INITRD_START, INITRD_SIZE); initrd_start = INITRD_START ? 
INITRD_START + PAGE_OFFSET : 0; @@ -969,8 +969,7 @@ void __init setup_arch(char **cmdline_p) else { printk(KERN_ERR "initrd extends beyond end of memory " "(0x%08lx > 0x%08lx)\ndisabling initrd\n", - INITRD_START + INITRD_SIZE, - max_low_pfn << PAGE_SHIFT); + INITRD_START + INITRD_SIZE, PFN_PHYS(max_low_pfn)); initrd_start = 0; } } @@ -1013,7 +1012,7 @@ void __init setup_arch(char **cmdline_p) request_resource(&ioport_resource, standard_io_resources+i); /* Tell the PCI layer not to allocate too close to the RAM area.. */ - low_mem_size = ((max_low_pfn << PAGE_SHIFT) + 0xfffff) & ~0xfffff; + low_mem_size = ((max_low_pfn << MMUPAGE_SHIFT) + 0xfffff) & ~0xfffff; if (low_mem_size > pci_mem_start) pci_mem_start = low_mem_size; @@ -1686,19 +1685,14 @@ static void __init init_rise(struct cpui set_bit(X86_FEATURE_CX8, &c->x86_capability); } - -extern void trap_init_f00f_bug(void); - static void __init init_intel(struct cpuinfo_x86 *c) { -#ifndef CONFIG_M686 - static int f00f_workaround_enabled = 0; -#endif + static int f00f_workaround_enabled; + extern void trap_init_f00f_bug(void); extern void mcheck_init(struct cpuinfo_x86 *c); char *p = NULL; unsigned int l1i = 0, l1d = 0, l2 = 0, l3 = 0; /* Cache sizes */ -#ifndef CONFIG_M686 /* * All current models of Pentium and Pentium with MMX technology CPUs * have the F0 0F bug, which lets nonpriviledged users lock up the system. @@ -1709,12 +1703,9 @@ static void __init init_intel(struct cpu c->f00f_bug = 1; if ( !f00f_workaround_enabled ) { trap_init_f00f_bug(); - printk(KERN_NOTICE "Intel Pentium with F0 0F bug - workaround enabled.\n"); f00f_workaround_enabled = 1; } } -#endif - if (c->cpuid_level > 1) { /* supports eax=2 call */ diff -urpN linux-2.4.9-linus/arch/i386/kernel/setup.c.orig linux-2.4.9-larpage/arch/i386/kernel/setup.c.orig --- linux-2.4.9-linus/arch/i386/kernel/setup.c.orig 1969-12-31 16:00:00.000000000 -0800 +++ linux-2.4.9-larpage/arch/i386/kernel/setup.c.orig 2002-11-20 02:02:20.000000000 -0800 @@ -0,0 +1,2533 @@ +/* + * linux/arch/i386/kernel/setup.c + * + * Copyright (C) 1995 Linus Torvalds + * + * Enhanced CPU type detection by Mike Jagdis, Patrick St. Jean + * and Martin Mares, November 1997. + * + * Force Cyrix 6x86(MX) and M II processors to report MTRR capability + * and Cyrix "coma bug" recognition by + * Zoltán Böszörményi February 1999. + * + * Force Centaur C6 processors to report MTRR capability. + * Bart Hartgers , May 1999. + * + * Intel Mobile Pentium II detection fix. Sean Gilley, June 1999. + * + * IDT Winchip tweaks, misc clean ups. + * Dave Jones , August 1999 + * + * Support of BIGMEM added by Gerhard Wichert, Siemens AG, July 1999 + * + * Better detection of Centaur/IDT WinChip models. + * Bart Hartgers , August 1999. + * + * Memory region support + * David Parsons , July-August 1999 + * + * Cleaned up cache-detection code + * Dave Jones , October 1999 + * + * Added proper L2 cache detection for Coppermine + * Dragan Stancevic , October 1999 + * + * Added the original array for capability flags but forgot to credit + * myself :) (~1998) Fixed/cleaned up some cpu_model_info and other stuff + * Jauder Ho , January 2000 + * + * Detection for Celeron coppermine, identify_cpu() overhauled, + * and a few other clean ups. 
+ * Dave Jones , April 2000 + * + * Pentium III FXSR, SSE support + * General FPU state handling cleanups + * Gareth Hughes , May 2000 + * + * Added proper Cascades CPU and L2 cache detection for Cascades + * and 8-way type cache happy bunch from Intel:^) + * Dragan Stancevic , May 2000 + * + * Forward port AMD Duron errata T13 from 2.2.17pre + * Dave Jones , August 2000 + * + * Forward port lots of fixes/improvements from 2.2.18pre + * Cyrix III, Pentium IV support. + * Dave Jones , October 2000 + * + * Massive cleanup of CPU detection and bug handling; + * Transmeta CPU detection, + * H. Peter Anvin , November 2000 + * + * Added E820 sanitization routine (removes overlapping memory regions); + * Brian Moyle , February 2001 + * + * VIA C3 Support. + * Dave Jones , March 2001 + */ + +/* + * This file handles the architecture-dependent parts of initialization + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#ifdef CONFIG_BLK_DEV_RAM +#include +#endif +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +/* + * Machine setup.. + */ + +char ignore_irq13; /* set if exception 16 works */ +struct cpuinfo_x86 boot_cpu_data = { 0, 0, 0, 0, -1, 1, 0, 0, -1 }; + +unsigned long mmu_cr4_features; + +/* + * Bus types .. + */ +#ifdef CONFIG_EISA +int EISA_bus; +#endif +int MCA_bus; + +/* for MCA, but anyone else can use it if they want */ +unsigned int machine_id; +unsigned int machine_submodel_id; +unsigned int BIOS_revision; +unsigned int mca_pentium_flag; + +/* For PCI or other memory-mapped resources */ +unsigned long pci_mem_start = 0x10000000; + +/* + * Setup options + */ +struct drive_info_struct { char dummy[32]; } drive_info; +struct screen_info screen_info; +struct apm_info apm_info; +struct sys_desc_table_struct { + unsigned short length; + unsigned char table[0]; +}; + +struct e820map e820; + +unsigned char aux_device_present; + +extern int root_mountflags; +extern char _text, _etext, _edata, _end; +extern unsigned long cpu_khz; + +static int disable_x86_serial_nr __initdata = 1; +static int disable_x86_fxsr __initdata = 0; + +/* + * This is set up by the setup-routine at boot-time + */ +#define PARAM ((unsigned char *)empty_zero_page) +#define SCREEN_INFO (*(struct screen_info *) (PARAM+0)) +#define EXT_MEM_K (*(unsigned short *) (PARAM+2)) +#define ALT_MEM_K (*(unsigned long *) (PARAM+0x1e0)) +#define E820_MAP_NR (*(char*) (PARAM+E820NR)) +#define E820_MAP ((struct e820entry *) (PARAM+E820MAP)) +#define APM_BIOS_INFO (*(struct apm_bios_info *) (PARAM+0x40)) +#define DRIVE_INFO (*(struct drive_info_struct *) (PARAM+0x80)) +#define SYS_DESC_TABLE (*(struct sys_desc_table_struct*)(PARAM+0xa0)) +#define MOUNT_ROOT_RDONLY (*(unsigned short *) (PARAM+0x1F2)) +#define RAMDISK_FLAGS (*(unsigned short *) (PARAM+0x1F8)) +#define ORIG_ROOT_DEV (*(unsigned short *) (PARAM+0x1FC)) +#define AUX_DEVICE_INFO (*(unsigned char *) (PARAM+0x1FF)) +#define LOADER_TYPE (*(unsigned char *) (PARAM+0x210)) +#define KERNEL_START (*(unsigned long *) (PARAM+0x214)) +#define INITRD_START (*(unsigned long *) (PARAM+0x218)) +#define INITRD_SIZE (*(unsigned long *) (PARAM+0x21c)) +#define COMMAND_LINE ((char *) (PARAM+2048)) +#define COMMAND_LINE_SIZE 256 + +#define RAMDISK_IMAGE_START_MASK 0x07FF +#define RAMDISK_PROMPT_FLAG 0x8000 +#define RAMDISK_LOAD_FLAG 0x4000 + +#ifdef CONFIG_VISWS +char 
visws_board_type = -1; +char visws_board_rev = -1; + +#define PIIX_PM_START 0x0F80 + +#define SIO_GPIO_START 0x0FC0 + +#define SIO_PM_START 0x0FC8 + +#define PMBASE PIIX_PM_START +#define GPIREG0 (PMBASE+0x30) +#define GPIREG(x) (GPIREG0+((x)/8)) +#define PIIX_GPI_BD_ID1 18 +#define PIIX_GPI_BD_REG GPIREG(PIIX_GPI_BD_ID1) + +#define PIIX_GPI_BD_SHIFT (PIIX_GPI_BD_ID1 % 8) + +#define SIO_INDEX 0x2e +#define SIO_DATA 0x2f + +#define SIO_DEV_SEL 0x7 +#define SIO_DEV_ENB 0x30 +#define SIO_DEV_MSB 0x60 +#define SIO_DEV_LSB 0x61 + +#define SIO_GP_DEV 0x7 + +#define SIO_GP_BASE SIO_GPIO_START +#define SIO_GP_MSB (SIO_GP_BASE>>8) +#define SIO_GP_LSB (SIO_GP_BASE&0xff) + +#define SIO_GP_DATA1 (SIO_GP_BASE+0) + +#define SIO_PM_DEV 0x8 + +#define SIO_PM_BASE SIO_PM_START +#define SIO_PM_MSB (SIO_PM_BASE>>8) +#define SIO_PM_LSB (SIO_PM_BASE&0xff) +#define SIO_PM_INDEX (SIO_PM_BASE+0) +#define SIO_PM_DATA (SIO_PM_BASE+1) + +#define SIO_PM_FER2 0x1 + +#define SIO_PM_GP_EN 0x80 + +static void +visws_get_board_type_and_rev(void) +{ + int raw; + + visws_board_type = (char)(inb_p(PIIX_GPI_BD_REG) & PIIX_GPI_BD_REG) + >> PIIX_GPI_BD_SHIFT; +/* + * Get Board rev. + * First, we have to initialize the 307 part to allow us access + * to the GPIO registers. Let's map them at 0x0fc0 which is right + * after the PIIX4 PM section. + */ + outb_p(SIO_DEV_SEL, SIO_INDEX); + outb_p(SIO_GP_DEV, SIO_DATA); /* Talk to GPIO regs. */ + + outb_p(SIO_DEV_MSB, SIO_INDEX); + outb_p(SIO_GP_MSB, SIO_DATA); /* MSB of GPIO base address */ + + outb_p(SIO_DEV_LSB, SIO_INDEX); + outb_p(SIO_GP_LSB, SIO_DATA); /* LSB of GPIO base address */ + + outb_p(SIO_DEV_ENB, SIO_INDEX); + outb_p(1, SIO_DATA); /* Enable GPIO registers. */ + +/* + * Now, we have to map the power management section to write + * a bit which enables access to the GPIO registers. + * What lunatic came up with this shit? + */ + outb_p(SIO_DEV_SEL, SIO_INDEX); + outb_p(SIO_PM_DEV, SIO_DATA); /* Talk to GPIO regs. */ + + outb_p(SIO_DEV_MSB, SIO_INDEX); + outb_p(SIO_PM_MSB, SIO_DATA); /* MSB of PM base address */ + + outb_p(SIO_DEV_LSB, SIO_INDEX); + outb_p(SIO_PM_LSB, SIO_DATA); /* LSB of PM base address */ + + outb_p(SIO_DEV_ENB, SIO_INDEX); + outb_p(1, SIO_DATA); /* Enable PM registers. */ + +/* + * Now, write the PM register which enables the GPIO registers. + */ + outb_p(SIO_PM_FER2, SIO_PM_INDEX); + outb_p(SIO_PM_GP_EN, SIO_PM_DATA); + +/* + * Now, initialize the GPIO registers. + * We want them all to be inputs which is the + * power on default, so let's leave them alone. + * So, let's just read the board rev! + */ + raw = inb_p(SIO_GP_DATA1); + raw &= 0x7f; /* 7 bits of valid board revision ID. */ + + if (visws_board_type == VISWS_320) { + if (raw < 0x6) { + visws_board_rev = 4; + } else if (raw < 0xc) { + visws_board_rev = 5; + } else { + visws_board_rev = 6; + + } + } else if (visws_board_type == VISWS_540) { + visws_board_rev = 2; + } else { + visws_board_rev = raw; + } + + printk(KERN_INFO "Silicon Graphics %s (rev %d)\n", + visws_board_type == VISWS_320 ? "320" : + (visws_board_type == VISWS_540 ? 
"540" : + "unknown"), + visws_board_rev); + } +#endif + + +static char command_line[COMMAND_LINE_SIZE]; + char saved_command_line[COMMAND_LINE_SIZE]; + +struct resource standard_io_resources[] = { + { "dma1", 0x00, 0x1f, IORESOURCE_BUSY }, + { "pic1", 0x20, 0x3f, IORESOURCE_BUSY }, + { "timer", 0x40, 0x5f, IORESOURCE_BUSY }, + { "keyboard", 0x60, 0x6f, IORESOURCE_BUSY }, + { "dma page reg", 0x80, 0x8f, IORESOURCE_BUSY }, + { "pic2", 0xa0, 0xbf, IORESOURCE_BUSY }, + { "dma2", 0xc0, 0xdf, IORESOURCE_BUSY }, + { "fpu", 0xf0, 0xff, IORESOURCE_BUSY } +}; + +#define STANDARD_IO_RESOURCES (sizeof(standard_io_resources)/sizeof(struct resource)) + +static struct resource code_resource = { "Kernel code", 0x100000, 0 }; +static struct resource data_resource = { "Kernel data", 0, 0 }; +static struct resource vram_resource = { "Video RAM area", 0xa0000, 0xbffff, IORESOURCE_BUSY }; + +/* System ROM resources */ +#define MAXROMS 6 +static struct resource rom_resources[MAXROMS] = { + { "System ROM", 0xF0000, 0xFFFFF, IORESOURCE_BUSY }, + { "Video ROM", 0xc0000, 0xc7fff, IORESOURCE_BUSY } +}; + +#define romsignature(x) (*(unsigned short *)(x) == 0xaa55) + +static void __init probe_roms(void) +{ + int roms = 1; + unsigned long base; + unsigned char *romstart; + + request_resource(&iomem_resource, rom_resources+0); + + /* Video ROM is standard at C000:0000 - C7FF:0000, check signature */ + for (base = 0xC0000; base < 0xE0000; base += 2048) { + romstart = bus_to_virt(base); + if (!romsignature(romstart)) + continue; + request_resource(&iomem_resource, rom_resources + roms); + roms++; + break; + } + + /* Extension roms at C800:0000 - DFFF:0000 */ + for (base = 0xC8000; base < 0xE0000; base += 2048) { + unsigned long length; + + romstart = bus_to_virt(base); + if (!romsignature(romstart)) + continue; + length = romstart[2] * 512; + if (length) { + unsigned int i; + unsigned char chksum; + + chksum = 0; + for (i = 0; i < length; i++) + chksum += romstart[i]; + + /* Good checksum? */ + if (!chksum) { + rom_resources[roms].start = base; + rom_resources[roms].end = base + length - 1; + rom_resources[roms].name = "Extension ROM"; + rom_resources[roms].flags = IORESOURCE_BUSY; + + request_resource(&iomem_resource, rom_resources + roms); + roms++; + if (roms >= MAXROMS) + return; + } + } + } + + /* Final check for motherboard extension rom at E000:0000 */ + base = 0xE0000; + romstart = bus_to_virt(base); + + if (romsignature(romstart)) { + rom_resources[roms].start = base; + rom_resources[roms].end = base + 65535; + rom_resources[roms].name = "Extension ROM"; + rom_resources[roms].flags = IORESOURCE_BUSY; + + request_resource(&iomem_resource, rom_resources + roms); + } +} + +void __init add_memory_region(unsigned long long start, + unsigned long long size, int type) +{ + int x = e820.nr_map; + + if (x == E820MAX) { + printk(KERN_ERR "Ooops! 
Too many entries in the memory map!\n");
+		return;
+	}
+
+	e820.map[x].addr = start;
+	e820.map[x].size = size;
+	e820.map[x].type = type;
+	e820.nr_map++;
+} /* add_memory_region */
+
+#define E820_DEBUG	1
+
+static void __init print_memory_map(char *who)
+{
+	int i;
+
+	for (i = 0; i < e820.nr_map; i++) {
+		printk(" %s: %016Lx - %016Lx ", who,
+			e820.map[i].addr,
+			e820.map[i].addr + e820.map[i].size);
+		switch (e820.map[i].type) {
+		case E820_RAM:	printk("(usable)\n");
+				break;
+		case E820_RESERVED:
+				printk("(reserved)\n");
+				break;
+		case E820_ACPI:
+				printk("(ACPI data)\n");
+				break;
+		case E820_NVS:
+				printk("(ACPI NVS)\n");
+				break;
+		default:	printk("type %lu\n", e820.map[i].type);
+				break;
+		}
+	}
+}
+
+/*
+ * Sanitize the BIOS e820 map.
+ *
+ * Some e820 responses include overlapping entries.  The following
+ * replaces the original e820 map with a new one, removing overlaps.
+ *
+ */
+static int __init sanitize_e820_map(struct e820entry * biosmap, char * pnr_map)
+{
+	struct change_member {
+		struct e820entry *pbios; /* pointer to original bios entry */
+		unsigned long long addr; /* address for this change point */
+	};
+	struct change_member change_point_list[2*E820MAX];
+	struct change_member *change_point[2*E820MAX];
+	struct e820entry *overlap_list[E820MAX];
+	struct e820entry new_bios[E820MAX];
+	struct change_member *change_tmp;
+	unsigned long current_type, last_type;
+	unsigned long long last_addr;
+	int chgidx, still_changing;
+	int overlap_entries;
+	int new_bios_entry;
+	int old_nr, new_nr;
+	int i;
+
+	/*
+		Visually we're performing the following (1,2,3,4 = memory types)...
+
+		Sample memory map (w/overlaps):
+		   ____22__________________
+		   ______________________4_
+		   ____1111________________
+		   _44_____________________
+		   11111111________________
+		   ____________________33__
+		   ___________44___________
+		   __________33333_________
+		   ______________22________
+		   ___________________2222_
+		   _________111111111______
+		   _____________________11_
+		   _________________4______
+
+		Sanitized equivalent (no overlap):
+		   1_______________________
+		   _44_____________________
+		   ___1____________________
+		   ____22__________________
+		   ______11________________
+		   _________1______________
+		   __________3_____________
+		   ___________44___________
+		   _____________33_________
+		   _______________2________
+		   ________________1_______
+		   _________________4______
+		   ___________________2____
+		   ____________________33__
+		   ______________________4_
+	*/
+
+	/* if there's only one memory region, don't bother */
+	if (*pnr_map < 2)
+		return -1;
+
+	old_nr = *pnr_map;
+
+	/* bail out if we find any unreasonable addresses in bios map */
+	for (i=0; i<old_nr; i++)
+		if (biosmap[i].addr + biosmap[i].size < biosmap[i].addr)
+			return -1;
+
+	/* create pointers for initial change-point information (for sorting) */
+	for (i=0; i < 2*old_nr; i++)
+		change_point[i] = &change_point_list[i];
+
+	/* record all known change-points (starting and ending addresses) */
+	chgidx = 0;
+	for (i=0; i < old_nr; i++) {
+		change_point[chgidx]->addr = biosmap[i].addr;
+		change_point[chgidx++]->pbios = &biosmap[i];
+		change_point[chgidx]->addr = biosmap[i].addr + biosmap[i].size;
+		change_point[chgidx++]->pbios = &biosmap[i];
+	}
+
+	/* sort change-point list by memory addresses (low -> high) */
+	still_changing = 1;
+	while (still_changing) {
+		still_changing = 0;
+		for (i=1; i < 2*old_nr; i++) {
+			/* if <current_addr> > <last_addr>, swap */
+			/* or, if current=<start_addr> & last=<end_addr>, swap */
+			if ((change_point[i]->addr < change_point[i-1]->addr) ||
+				((change_point[i]->addr == change_point[i-1]->addr) &&
+				 (change_point[i]->addr == change_point[i]->pbios->addr) &&
+				 (change_point[i-1]->addr != change_point[i-1]->pbios->addr))
+			   )
+			{
+				change_tmp = change_point[i];
+				change_point[i] = change_point[i-1];
+				change_point[i-1] = change_tmp;
+				still_changing=1;
+			}
+		}
+	}
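The sorted change-points are now swept in address order while an overlap
list tracks which BIOS entries are "open"; at each point the largest open
type wins.  The standalone sketch below models that sweep on a two-region
overlap.  It is illustrative only: simplified, with invented names, and
without the kernel's tie-breaking for equal addresses.

	#include <stdio.h>
	#include <stdlib.h>

	#define MAXTYPE 4

	struct point { unsigned long long addr; int type; int is_start; };

	static int cmp(const void *a, const void *b)
	{
		const struct point *p = a, *q = b;
		return (p->addr > q->addr) - (p->addr < q->addr);
	}

	int main(void)
	{
		/* inputs: 0-0xa0000 usable (1), 0x90000-0xb0000 reserved (2) */
		struct point pts[] = {
			{ 0x00000, 1, 1 }, { 0xa0000, 1, 0 },
			{ 0x90000, 2, 1 }, { 0xb0000, 2, 0 },
		};
		int open[MAXTYPE + 1] = { 0 };
		int i, t, last_type = 0;
		unsigned long long last_addr = 0;

		qsort(pts, 4, sizeof(*pts), cmp);
		for (i = 0; i < 4; i++) {
			int cur = 0;

			open[pts[i].type] += pts[i].is_start ? 1 : -1;
			for (t = 1; t <= MAXTYPE; t++)	/* largest open type wins */
				if (open[t])
					cur = t;
			if (cur != last_type) {
				if (last_type)	/* emit the region just closed */
					printf("%08llx-%08llx type %d\n",
					       last_addr, pts[i].addr, last_type);
				last_addr = pts[i].addr;
				last_type = cur;
			}
		}
		return 0;	/* prints 0-0x90000 type 1, then 0x90000-0xb0000 type 2 */
	}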
+	/* create a new bios memory map, removing overlaps */
+	overlap_entries=0;	/* number of entries in the overlap table */
+	new_bios_entry=0;	/* index for creating new bios map entries */
+	last_type = 0;		/* start with undefined memory type */
+	last_addr = 0;		/* start with 0 as last starting address */
+	/* loop through change-points, determining effect on the new bios map */
+	for (chgidx=0; chgidx < 2*old_nr; chgidx++)
+	{
+		/* keep track of all overlapping bios entries */
+		if (change_point[chgidx]->addr == change_point[chgidx]->pbios->addr)
+		{
+			/* add map entry to overlap list (> 1 entry implies an overlap) */
+			overlap_list[overlap_entries++]=change_point[chgidx]->pbios;
+		}
+		else
+		{
+			/* remove entry from list (order independent, so swap with last) */
+			for (i=0; i<overlap_entries; i++)
+			{
+				if (overlap_list[i] == change_point[chgidx]->pbios)
+					overlap_list[i] = overlap_list[overlap_entries-1];
+			}
+			overlap_entries--;
+		}
+		/* if there are overlapping entries, decide which "type" to use */
+		/* (larger value takes precedence -- 1=usable, 2,3,4,4+=unusable) */
+		current_type = 0;
+		for (i=0; i<overlap_entries; i++)
+			if (overlap_list[i]->type > current_type)
+				current_type = overlap_list[i]->type;
+		/* continue building up new bios map based on this information */
+		if (current_type != last_type) {
+			if (last_type != 0) {
+				new_bios[new_bios_entry].size =
+					change_point[chgidx]->addr - last_addr;
+				/* move forward only if the new size was non-zero */
+				if (new_bios[new_bios_entry].size != 0)
+					if (++new_bios_entry >= E820MAX)
+						break;	/* no more space left for new bios entries */
+			}
+			if (current_type != 0) {
+				new_bios[new_bios_entry].addr = change_point[chgidx]->addr;
+				new_bios[new_bios_entry].type = current_type;
+				last_addr=change_point[chgidx]->addr;
+			}
+			last_type = current_type;
+		}
+	}
+	new_nr = new_bios_entry;	/* retain count for new bios entries */
+
+	/* copy new bios mapping into original location */
+	memcpy(biosmap, new_bios, new_nr*sizeof(struct e820entry));
+	*pnr_map = new_nr;
+
+	return 0;
+}
+
+/*
+ * Copy the BIOS e820 map into a safe place.
+ *
+ * Sanity-check it while we're at it..
+ *
+ * If we're lucky and live on a modern system, the setup code
+ * will have given us a memory map that we can use to properly
+ * set up memory.  If we aren't, we'll fake a memory map.
+ *
+ * We check to see that the memory map contains at least 2 elements
+ * before we'll use it, because the detection code in setup.S may
+ * not be perfect and most every PC known to man has two memory
+ * regions: one from 0 to 640k, and one from 1mb up.  (The IBM
+ * thinkpad 560x, for example, does not cooperate with the memory
+ * detection code.)
+ */
+static int __init copy_e820_map(struct e820entry * biosmap, int nr_map)
+{
+	/* Only one memory region (or negative)? Ignore it */
+	if (nr_map < 2)
+		return -1;
+
+	do {
+		unsigned long long start = biosmap->addr;
+		unsigned long long size = biosmap->size;
+		unsigned long long end = start + size;
+		unsigned long type = biosmap->type;
+
+		/* Overflow in 64 bits? Ignore the memory map. */
+		if (start > end)
+			return -1;
+
+		/*
+		 * Some BIOSes claim RAM in the 640k - 1M region.
+		 * Not right. Fix it up.
+		 */
+		if (type == E820_RAM) {
+			if (start < 0x100000ULL && end > 0xA0000ULL) {
+				if (start < 0xA0000ULL)
+					add_memory_region(start, 0xA0000ULL-start, type);
+				if (end <= 0x100000ULL)
+					continue;
+				start = 0x100000ULL;
+				size = end - start;
+			}
+		}
+		add_memory_region(start, size, type);
+	} while (biosmap++,--nr_map);
+	return 0;
+}
+
+/*
+ * Do NOT EVER look at the BIOS memory size location.
+ * It does not work on many machines.
+ */
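The 640k-1M fixup in copy_e820_map() above splits any RAM claim that
crosses the legacy ISA hole.  A worked example, illustrative only, with a
made-up BIOS entry spanning 0x90000-0x200000:

	#include <stdio.h>

	int main(void)
	{
		unsigned long long start = 0x90000ULL, end = 0x200000ULL;

		if (start < 0xA0000ULL)		/* keep the part below 640k */
			printf("RAM %08llx-%08llx\n", start, 0xA0000ULL);
		if (end > 0x100000ULL)		/* and the part above 1MB */
			printf("RAM %08llx-%08llx\n", 0x100000ULL, end);
		/* the 0xA0000-0x100000 slice is dropped entirely */
		return 0;
	}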
+#define LOWMEMSIZE()	(0x9f000)
+
+void __init setup_memory_region(void)
+{
+	char *who = "BIOS-e820";
+
+	/*
+	 * Try to copy the BIOS-supplied E820-map.
+	 *
+	 * Otherwise fake a memory map; one section from 0k->640k,
+	 * the next section from 1mb->appropriate_mem_k
+	 */
+	sanitize_e820_map(E820_MAP, &E820_MAP_NR);
+	if (copy_e820_map(E820_MAP, E820_MAP_NR) < 0) {
+		unsigned long mem_size;
+
+		/* compare results from other methods and take the greater */
+		if (ALT_MEM_K < EXT_MEM_K) {
+			mem_size = EXT_MEM_K;
+			who = "BIOS-88";
+		} else {
+			mem_size = ALT_MEM_K;
+			who = "BIOS-e801";
+		}
+
+		e820.nr_map = 0;
+		add_memory_region(0, LOWMEMSIZE(), E820_RAM);
+		add_memory_region(HIGH_MEMORY, mem_size << 10, E820_RAM);
+	}
+	printk(KERN_INFO "BIOS-provided physical RAM map:\n");
+	print_memory_map(who);
+} /* setup_memory_region */
+
+
+static inline void parse_mem_cmdline (char ** cmdline_p)
+{
+	char c = ' ', *to = command_line, *from = COMMAND_LINE;
+	int len = 0;
+	int usermem = 0;
+
+	/* Save unparsed command line copy for /proc/cmdline */
+	memcpy(saved_command_line, COMMAND_LINE, COMMAND_LINE_SIZE);
+	saved_command_line[COMMAND_LINE_SIZE-1] = '\0';
+
+	for (;;) {
+		/*
+		 * "mem=nopentium" disables the 4MB page tables.
+		 * "mem=XXX[kKmM]" defines a memory region from HIGH_MEM
+		 * to <mem>, overriding the bios size.
+		 * "mem=XXX[KkmM]@XXX[KkmM]" defines a memory region from
+		 * <start> to <start>+<mem>, overriding the bios size.
+		 */
+		if (c == ' ' && !memcmp(from, "mem=", 4)) {
+			if (to != command_line)
+				to--;
+			if (!memcmp(from+4, "nopentium", 9)) {
+				from += 9+4;
+				clear_bit(X86_FEATURE_PSE, &boot_cpu_data.x86_capability);
+			} else if (!memcmp(from+4, "exactmap", 8)) {
+				from += 8+4;
+				e820.nr_map = 0;
+				usermem = 1;
+			} else {
+				/* If the user specifies memory size, we
+				 * blow away any automatically generated
+				 * size
+				 */
+				unsigned long long start_at, mem_size;
+
+				if (usermem == 0) {
+					/* first time in: zap the whitelist
+					 * and reinitialize it with the
+					 * standard low-memory region.
+ */ + e820.nr_map = 0; + usermem = 1; + add_memory_region(0, LOWMEMSIZE(), E820_RAM); + } + mem_size = memparse(from+4, &from); + if (*from == '@') + start_at = memparse(from+1, &from); + else { + start_at = HIGH_MEMORY; + mem_size -= HIGH_MEMORY; + usermem=0; + } + add_memory_region(start_at, mem_size, E820_RAM); + } + } + c = *(from++); + if (!c) + break; + if (COMMAND_LINE_SIZE <= ++len) + break; + *(to++) = c; + } + *to = '\0'; + *cmdline_p = command_line; + if (usermem) { + printk(KERN_INFO "user-defined physical RAM map:\n"); + print_memory_map("user"); + } +} + +void __init setup_arch(char **cmdline_p) +{ + unsigned long bootmap_size, low_mem_size; + unsigned long start_pfn, max_pfn, max_low_pfn; + int i; + +#ifdef CONFIG_VISWS + visws_get_board_type_and_rev(); +#endif + + ROOT_DEV = to_kdev_t(ORIG_ROOT_DEV); + drive_info = DRIVE_INFO; + screen_info = SCREEN_INFO; + apm_info.bios = APM_BIOS_INFO; + if( SYS_DESC_TABLE.length != 0 ) { + MCA_bus = SYS_DESC_TABLE.table[3] &0x2; + machine_id = SYS_DESC_TABLE.table[0]; + machine_submodel_id = SYS_DESC_TABLE.table[1]; + BIOS_revision = SYS_DESC_TABLE.table[2]; + } + aux_device_present = AUX_DEVICE_INFO; + +#ifdef CONFIG_BLK_DEV_RAM + rd_image_start = RAMDISK_FLAGS & RAMDISK_IMAGE_START_MASK; + rd_prompt = ((RAMDISK_FLAGS & RAMDISK_PROMPT_FLAG) != 0); + rd_doload = ((RAMDISK_FLAGS & RAMDISK_LOAD_FLAG) != 0); +#endif + setup_memory_region(); + + if (!MOUNT_ROOT_RDONLY) + root_mountflags &= ~MS_RDONLY; + init_mm.start_code = (unsigned long) &_text; + init_mm.end_code = (unsigned long) &_etext; + init_mm.end_data = (unsigned long) &_edata; + init_mm.brk = (unsigned long) &_end; + + code_resource.start = virt_to_bus(&_text); + code_resource.end = virt_to_bus(&_etext)-1; + data_resource.start = virt_to_bus(&_etext); + data_resource.end = virt_to_bus(&_edata)-1; + + parse_mem_cmdline(cmdline_p); + +#define PFN_UP(x) (((x) + MMUPAGE_SIZE-1) >> MMUPAGE_SHIFT) +#define PFN_DOWN(x) ((x) >> MMUPAGE_SHIFT) +#define PFN_PHYS(x) ((x) << MMUPAGE_SHIFT) + +/* + * 128MB for vmalloc and initrd + */ +#define VMALLOC_RESERVE (unsigned long)(128 << 20) +#define MAXMEM (unsigned long)(-PAGE_OFFSET-VMALLOC_RESERVE) +#define MAXMEM_PFN PFN_DOWN(MAXMEM) +#define MAX_NONPAE_PFN (1 << (32 - MMUPAGE_SHIFT)) + + /* + * partially used pages are not usable - thus + * we are rounding upwards: + */ + start_pfn = PFN_UP(__pa(&_end)); + + /* + * Find the highest page frame number we have available + */ + max_pfn = 0; + for (i = 0; i < e820.nr_map; i++) { + unsigned long start, end; + /* RAM? 
*/ + if (e820.map[i].type != E820_RAM) + continue; + start = PFN_UP(e820.map[i].addr); + end = PFN_DOWN(e820.map[i].addr + e820.map[i].size); + if (start >= end) + continue; + if (end > max_pfn) + max_pfn = end; + } + max_pfn &= ~(PAGE_MMUCOUNT - 1); + + /* + * Determine low and high memory ranges: + */ + max_low_pfn = max_pfn; + if (max_low_pfn > MAXMEM_PFN) { + max_low_pfn = MAXMEM_PFN; +#ifndef CONFIG_HIGHMEM + /* Maximum memory usable is what is directly addressable */ + printk(KERN_WARNING "Warning only %ldMB will be used.\n", + MAXMEM>>20); + if (max_pfn > MAX_NONPAE_PFN) + printk(KERN_WARNING "Use a PAE enabled kernel.\n"); + else + printk(KERN_WARNING "Use a HIGHMEM enabled kernel.\n"); +#else /* !CONFIG_HIGHMEM */ +#ifndef CONFIG_X86_PAE + if (max_pfn > MAX_NONPAE_PFN) { + max_pfn = MAX_NONPAE_PFN; + printk(KERN_WARNING "Warning only 4GB will be used.\n"); + printk(KERN_WARNING "Use a PAE enabled kernel.\n"); + } +#endif /* !CONFIG_X86_PAE */ +#endif /* !CONFIG_HIGHMEM */ + } + +#ifdef CONFIG_HIGHMEM + highend_pfn = max_pfn; + if (highend_pfn > MAXMEM_PFN) { + printk(KERN_NOTICE "%ldMB HIGHMEM available.\n", + (highend_pfn-MAXMEM_PFN)>>(20-MMUPAGE_SHIFT)); + } +#endif + /* + * Initialize the boot-time allocator (with low memory only): + */ + bootmap_size = init_bootmem(start_pfn, max_low_pfn); + + /* + * Register fully available low RAM pages with the bootmem allocator. + */ + for (i = 0; i < e820.nr_map; i++) { + unsigned long curr_pfn, last_pfn, size; + /* + * Reserve usable low memory + */ + if (e820.map[i].type != E820_RAM) + continue; + /* + * We are rounding up the start address of usable memory: + */ + curr_pfn = PFN_UP(e820.map[i].addr); + if (curr_pfn >= max_low_pfn) + continue; + /* + * ... and at the end of the usable range downwards: + */ + last_pfn = PFN_DOWN(e820.map[i].addr + e820.map[i].size); + + if (last_pfn > max_low_pfn) + last_pfn = max_low_pfn; + + /* + * .. finally, did all the rounding and playing + * around just make the area go away? + */ + if (last_pfn <= curr_pfn) + continue; + + size = last_pfn - curr_pfn; + free_bootmem(PFN_PHYS(curr_pfn), PFN_PHYS(size)); + } + /* + * Reserve the bootmem bitmap itself as well. We do this in two + * steps (first step was init_bootmem()) because this catches + * the (very unlikely) case of us accidentally initializing the + * bootmem allocator with an invalid RAM area. + */ + reserve_bootmem(HIGH_MEMORY, + PFN_PHYS(start_pfn) + bootmap_size - HIGH_MEMORY); + + /* + * reserve physical page 0 - it's a special BIOS page on many boxes, + * enabling clean reboots, SMP operation, laptop functions. + */ + reserve_bootmem(0, MMUPAGE_SIZE); + +#ifdef CONFIG_SMP + /* + * But first pinch a few for the stack/trampoline stuff + * FIXME: Don't need the extra page at 4K, but need to fix + * trampoline before removing it. (see the GDT stuff) + */ + reserve_bootmem(MMUPAGE_SIZE, MMUPAGE_SIZE); + smp_alloc_memory(); /* AP processor realmode stacks in low memory*/ +#endif + +#ifdef CONFIG_X86_IO_APIC + /* + * Find and reserve possible boot-time SMP configuration: + */ + find_smp_config(); +#endif + paging_init(); +#ifdef CONFIG_X86_IO_APIC + /* + * get boot-time SMP configuration: + */ + if (smp_found_config) + get_smp_config(); +#endif +#ifdef CONFIG_X86_LOCAL_APIC + init_apic_mappings(); +#endif + +#ifdef CONFIG_BLK_DEV_INITRD + if (LOADER_TYPE && INITRD_START) { + if (INITRD_START + INITRD_SIZE <= PFN_PHYS(max_low_pfn)) { + reserve_bootmem(INITRD_START, INITRD_SIZE); + initrd_start = + INITRD_START ? 
INITRD_START + PAGE_OFFSET : 0; + initrd_end = initrd_start+INITRD_SIZE; + } + else { + printk(KERN_ERR "initrd extends beyond end of memory " + "(0x%08lx > 0x%08lx)\ndisabling initrd\n", + INITRD_START + INITRD_SIZE, PFN_PHYS(max_low_pfn)); + initrd_start = 0; + } + } +#endif + + /* + * Request address space for all standard RAM and ROM resources + * and also for regions reported as reserved by the e820. + */ + probe_roms(); + for (i = 0; i < e820.nr_map; i++) { + struct resource *res; + if (e820.map[i].addr + e820.map[i].size > 0x100000000ULL) + continue; + res = alloc_bootmem_low(sizeof(struct resource)); + switch (e820.map[i].type) { + case E820_RAM: res->name = "System RAM"; break; + case E820_ACPI: res->name = "ACPI Tables"; break; + case E820_NVS: res->name = "ACPI Non-volatile Storage"; break; + default: res->name = "reserved"; + } + res->start = e820.map[i].addr; + res->end = res->start + e820.map[i].size - 1; + res->flags = IORESOURCE_MEM | IORESOURCE_BUSY; + request_resource(&iomem_resource, res); + if (e820.map[i].type == E820_RAM) { + /* + * We dont't know which RAM region contains kernel data, + * so we try it repeatedly and let the resource manager + * test it. + */ + request_resource(res, &code_resource); + request_resource(res, &data_resource); + } + } + request_resource(&iomem_resource, &vram_resource); + + /* request I/O space for devices used on all i[345]86 PCs */ + for (i = 0; i < STANDARD_IO_RESOURCES; i++) + request_resource(&ioport_resource, standard_io_resources+i); + + /* Tell the PCI layer not to allocate too close to the RAM area.. */ + low_mem_size = ((max_low_pfn << MMUPAGE_SHIFT) + 0xfffff) & ~0xfffff; + if (low_mem_size > pci_mem_start) + pci_mem_start = low_mem_size; + +#ifdef CONFIG_VT +#if defined(CONFIG_VGA_CONSOLE) + conswitchp = &vga_con; +#elif defined(CONFIG_DUMMY_CONSOLE) + conswitchp = &dummy_con; +#endif +#endif +} + +#ifndef CONFIG_X86_TSC +static int tsc_disable __initdata = 0; + +static int __init tsc_setup(char *str) +{ + tsc_disable = 1; + return 1; +} + +__setup("notsc", tsc_setup); +#endif + +static int __init get_model_name(struct cpuinfo_x86 *c) +{ + unsigned int *v; + char *p, *q; + + if (cpuid_eax(0x80000000) < 0x80000004) + return 0; + + v = (unsigned int *) c->x86_model_id; + cpuid(0x80000002, &v[0], &v[1], &v[2], &v[3]); + cpuid(0x80000003, &v[4], &v[5], &v[6], &v[7]); + cpuid(0x80000004, &v[8], &v[9], &v[10], &v[11]); + c->x86_model_id[48] = 0; + + /* Intel chips right-justify this string for some dumb reason; + undo that brain damage */ + p = q = &c->x86_model_id[0]; + while ( *p == ' ' ) + p++; + if ( p != q ) { + while ( *p ) + *q++ = *p++; + while ( q <= &c->x86_model_id[48] ) + *q++ = '\0'; /* Zero-pad the rest */ + } + + return 1; +} + + +static void __init display_cacheinfo(struct cpuinfo_x86 *c) +{ + unsigned int n, dummy, ecx, edx, l2size; + + n = cpuid_eax(0x80000000); + + if (n >= 0x80000005) { + cpuid(0x80000005, &dummy, &dummy, &ecx, &edx); + printk(KERN_INFO "CPU: L1 I Cache: %dK (%d bytes/line), D cache %dK (%d bytes/line)\n", + edx>>24, edx&0xFF, ecx>>24, ecx&0xFF); + c->x86_cache_size=(ecx>>24)+(edx>>24); + } + + if (n < 0x80000006) /* Some chips just has a large L1. 
*/ + return; + + ecx = cpuid_ecx(0x80000006); + l2size = ecx >> 16; + + /* AMD errata T13 (order #21922) */ + if (c->x86_vendor == X86_VENDOR_AMD && + c->x86 == 6 && + c->x86_model == 3 && + c->x86_mask == 0) { + l2size = 64; + } + + if ( l2size == 0 ) + return; /* Again, no L2 cache is possible */ + + c->x86_cache_size = l2size; + + printk(KERN_INFO "CPU: L2 Cache: %dK (%d bytes/line)\n", + l2size, ecx & 0xFF); +} + +/* + * B step AMD K6 before B 9730xxxx have hardware bugs that can cause + * misexecution of code under Linux. Owners of such processors should + * contact AMD for precise details and a CPU swap. + * + * See http://www.mygale.com/~poulot/k6bug.html + * http://www.amd.com/K6/k6docs/revgd.html + * + * The following test is erm.. interesting. AMD neglected to up + * the chip setting when fixing the bug but they also tweaked some + * performance at the same time.. + */ + +extern void vide(void); +__asm__(".align 4\nvide: ret"); + +static int __init init_amd(struct cpuinfo_x86 *c) +{ + u32 l, h; + int mbytes = max_mapnr >> (20-PAGE_SHIFT); + int r; + + /* Bit 31 in normal CPUID used for nonstandard 3DNow ID; + 3DNow is IDd by bit 31 in extended CPUID (1*32+31) anyway */ + clear_bit(0*32+31, &c->x86_capability); + + r = get_model_name(c); + + switch(c->x86) + { + case 5: + if( c->x86_model < 6 ) + { + /* Based on AMD doc 20734R - June 2000 */ + if ( c->x86_model == 0 ) { + clear_bit(X86_FEATURE_APIC, &c->x86_capability); + set_bit(X86_FEATURE_PGE, &c->x86_capability); + } + break; + } + + if ( c->x86_model == 6 && c->x86_mask == 1 ) { + const int K6_BUG_LOOP = 1000000; + int n; + void (*f_vide)(void); + unsigned long d, d2; + + printk(KERN_INFO "AMD K6 stepping B detected - "); + + /* + * It looks like AMD fixed the 2.6.2 bug and improved indirect + * calls at the same time. + */ + + n = K6_BUG_LOOP; + f_vide = vide; + rdtscl(d); + while (n--) + f_vide(); + rdtscl(d2); + d = d2-d; + + /* Knock these two lines out if it debugs out ok */ + printk(KERN_INFO "K6 BUG %ld %d (Report these if test report is incorrect)\n", d, 20*K6_BUG_LOOP); + printk(KERN_INFO "AMD K6 stepping B detected - "); + /* -- cut here -- */ + if (d > 20*K6_BUG_LOOP) + printk("system stability may be impaired when more than 32 MB are used.\n"); + else + printk("probably OK (after B9730xxxx).\n"); + printk(KERN_INFO "Please see http://www.mygale.com/~poulot/k6bug.html\n"); + } + + /* K6 with old style WHCR */ + if( c->x86_model < 8 || + (c->x86_model== 8 && c->x86_mask < 8)) + { + /* We can only write allocate on the low 508Mb */ + if(mbytes>508) + mbytes=508; + + rdmsr(0xC0000082, l, h); + if ((l&0x0000FFFF)==0) { + unsigned long flags; + l=(1<<0)|((mbytes/4)<<1); + local_irq_save(flags); + __asm__ __volatile__ ("wbinvd": : :"memory"); + wrmsr(0xC0000082, l, h); + local_irq_restore(flags); + printk(KERN_INFO "Enabling old style K6 write allocation for %d Mb\n", + mbytes); + + } + break; + } + if (c->x86_model == 8 || c->x86_model == 9 || c->x86_model == 13) + { + /* The more serious chips .. 
*/ + + if(mbytes>4092) + mbytes=4092; + + rdmsr(0xC0000082, l, h); + if ((l&0xFFFF0000)==0) { + unsigned long flags; + l=((mbytes>>2)<<22)|(1<<16); + local_irq_save(flags); + __asm__ __volatile__ ("wbinvd": : :"memory"); + wrmsr(0xC0000082, l, h); + local_irq_restore(flags); + printk(KERN_INFO "Enabling new style K6 write allocation for %d Mb\n", + mbytes); + } + + /* Set MTRR capability flag if appropriate */ + if ( (c->x86_model == 13) || + (c->x86_model == 9) || + ((c->x86_model == 8) && + (c->x86_mask >= 8)) ) + set_bit(X86_FEATURE_K6_MTRR, &c->x86_capability); + break; + } + + break; + + case 6: /* An Athlon/Duron. We can trust the BIOS probably */ + break; + } + + display_cacheinfo(c); + return r; +} + +/* + * Read Cyrix DEVID registers (DIR) to get more detailed info. about the CPU + */ +static void do_cyrix_devid(unsigned char *dir0, unsigned char *dir1) +{ + unsigned char ccr2, ccr3; + unsigned long flags; + + /* we test for DEVID by checking whether CCR3 is writable */ + local_irq_save(flags); + ccr3 = getCx86(CX86_CCR3); + setCx86(CX86_CCR3, ccr3 ^ 0x80); + getCx86(0xc0); /* dummy to change bus */ + + if (getCx86(CX86_CCR3) == ccr3) { /* no DEVID regs. */ + ccr2 = getCx86(CX86_CCR2); + setCx86(CX86_CCR2, ccr2 ^ 0x04); + getCx86(0xc0); /* dummy */ + + if (getCx86(CX86_CCR2) == ccr2) /* old Cx486SLC/DLC */ + *dir0 = 0xfd; + else { /* Cx486S A step */ + setCx86(CX86_CCR2, ccr2); + *dir0 = 0xfe; + } + } + else { + setCx86(CX86_CCR3, ccr3); /* restore CCR3 */ + + /* read DIR0 and DIR1 CPU registers */ + *dir0 = getCx86(CX86_DIR0); + *dir1 = getCx86(CX86_DIR1); + } + local_irq_restore(flags); +} + +/* + * Cx86_dir0_msb is a HACK needed by check_cx686_cpuid/slop in bugs.h in + * order to identify the Cyrix CPU model after we're out of setup.c + * + * Actually since bugs.h doesnt even reference this perhaps someone should + * fix the documentation ??? + */ +unsigned char Cx86_dir0_msb __initdata = 0; + +static char Cx86_model[][9] __initdata = { + "Cx486", "Cx486", "5x86 ", "6x86", "MediaGX ", "6x86MX ", + "M II ", "Unknown" +}; +static char Cx486_name[][5] __initdata = { + "SLC", "DLC", "SLC2", "DLC2", "SRx", "DRx", + "SRx2", "DRx2" +}; +static char Cx486S_name[][4] __initdata = { + "S", "S2", "Se", "S2e" +}; +static char Cx486D_name[][4] __initdata = { + "DX", "DX2", "?", "?", "?", "DX4" +}; +static char Cx86_cb[] __initdata = "?.5x Core/Bus Clock"; +static char cyrix_model_mult1[] __initdata = "12??43"; +static char cyrix_model_mult2[] __initdata = "12233445"; + +/* + * Reset the slow-loop (SLOP) bit on the 686(L) which is set by some old + * BIOSes for compatability with DOS games. This makes the udelay loop + * work correctly, and improves performance. + * + * FIXME: our newer udelay uses the tsc. 
We dont need to frob with SLOP + */ + +extern void calibrate_delay(void) __init; + +static void __init check_cx686_slop(struct cpuinfo_x86 *c) +{ + if (Cx86_dir0_msb == 3) { + unsigned char ccr3, ccr5; + unsigned long flags; + + local_irq_save(flags); + ccr3 = getCx86(CX86_CCR3); + setCx86(CX86_CCR3, (ccr3 & 0x0f) | 0x10); /* enable MAPEN */ + ccr5 = getCx86(CX86_CCR5); + if (ccr5 & 2) + setCx86(CX86_CCR5, ccr5 & 0xfd); /* reset SLOP */ + setCx86(CX86_CCR3, ccr3); /* disable MAPEN */ + local_irq_restore(flags); + + if (ccr5 & 2) { /* possible wrong calibration done */ + printk(KERN_INFO "Recalibrating delay loop with SLOP bit reset\n"); + calibrate_delay(); + c->loops_per_jiffy = loops_per_jiffy; + } + } +} + +static void __init init_cyrix(struct cpuinfo_x86 *c) +{ + unsigned char dir0, dir0_msn, dir0_lsn, dir1 = 0; + char *buf = c->x86_model_id; + const char *p = NULL; + + /* Bit 31 in normal CPUID used for nonstandard 3DNow ID; + 3DNow is IDd by bit 31 in extended CPUID (1*32+31) anyway */ + clear_bit(0*32+31, &c->x86_capability); + + /* Cyrix used bit 24 in extended (AMD) CPUID for Cyrix MMX extensions */ + if ( test_bit(1*32+24, &c->x86_capability) ) { + clear_bit(1*32+24, &c->x86_capability); + set_bit(X86_FEATURE_CXMMX, &c->x86_capability); + } + + do_cyrix_devid(&dir0, &dir1); + + check_cx686_slop(c); + + Cx86_dir0_msb = dir0_msn = dir0 >> 4; /* identifies CPU "family" */ + dir0_lsn = dir0 & 0xf; /* model or clock multiplier */ + + /* common case step number/rev -- exceptions handled below */ + c->x86_model = (dir1 >> 4) + 1; + c->x86_mask = dir1 & 0xf; + + /* Now cook; the original recipe is by Channing Corn, from Cyrix. + * We do the same thing for each generation: we work out + * the model, multiplier and stepping. Black magic included, + * to make the silicon step/rev numbers match the printed ones. + */ + + switch (dir0_msn) { + unsigned char tmp; + + case 0: /* Cx486SLC/DLC/SRx/DRx */ + p = Cx486_name[dir0_lsn & 7]; + break; + + case 1: /* Cx486S/DX/DX2/DX4 */ + p = (dir0_lsn & 8) ? Cx486D_name[dir0_lsn & 5] + : Cx486S_name[dir0_lsn & 3]; + break; + + case 2: /* 5x86 */ + Cx86_cb[2] = cyrix_model_mult1[dir0_lsn & 5]; + p = Cx86_cb+2; + break; + + case 3: /* 6x86/6x86L */ + Cx86_cb[1] = ' '; + Cx86_cb[2] = cyrix_model_mult1[dir0_lsn & 5]; + if (dir1 > 0x21) { /* 686L */ + Cx86_cb[0] = 'L'; + p = Cx86_cb; + (c->x86_model)++; + } else /* 686 */ + p = Cx86_cb+1; + /* Emulate MTRRs using Cyrix's ARRs. */ + set_bit(X86_FEATURE_CYRIX_ARR, &c->x86_capability); + /* 6x86's contain this bug */ + c->coma_bug = 1; + break; + + case 4: /* MediaGX/GXm */ + /* + * Life sometimes gets weiiiiiiiird if we use this + * on the MediaGX. So we turn it off for now. + */ + +#ifdef CONFIG_PCI + /* It isnt really a PCI quirk directly, but the cure is the + same. The MediaGX has deep magic SMM stuff that handles the + SB emulation. It thows away the fifo on disable_dma() which + is wrong and ruins the audio. + + Bug2: VSA1 has a wrap bug so that using maximum sized DMA + causes bad things. According to NatSemi VSA2 has another + bug to do with 'hlt'. I've not seen any boards using VSA2 + and X doesn't seem to support it either so who cares 8). + VSA1 we work around however. 
+ */ + + printk(KERN_INFO "Working around Cyrix MediaGX virtual DMA bugs.\n"); + isa_dma_bridge_buggy = 2; +#endif + c->x86_cache_size=16; /* Yep 16K integrated cache thats it */ + + /* GXm supports extended cpuid levels 'ala' AMD */ + if (c->cpuid_level == 2) { + get_model_name(c); /* get CPU marketing name */ + clear_bit(X86_FEATURE_TSC, c->x86_capability); + return; + } + else { /* MediaGX */ + Cx86_cb[2] = (dir0_lsn & 1) ? '3' : '4'; + p = Cx86_cb+2; + c->x86_model = (dir1 & 0x20) ? 1 : 2; + clear_bit(X86_FEATURE_TSC, &c->x86_capability); + } + break; + + case 5: /* 6x86MX/M II */ + if (dir1 > 7) + { + dir0_msn++; /* M II */ + /* Enable MMX extensions (App note 108) */ + setCx86(CX86_CCR7, getCx86(CX86_CCR7)|1); + } + else + { + c->coma_bug = 1; /* 6x86MX, it has the bug. */ + } + tmp = (!(dir0_lsn & 7) || dir0_lsn & 1) ? 2 : 0; + Cx86_cb[tmp] = cyrix_model_mult2[dir0_lsn & 7]; + p = Cx86_cb+tmp; + if (((dir1 & 0x0f) > 4) || ((dir1 & 0xf0) == 0x20)) + (c->x86_model)++; + /* Emulate MTRRs using Cyrix's ARRs. */ + set_bit(X86_FEATURE_CYRIX_ARR, &c->x86_capability); + break; + + case 0xf: /* Cyrix 486 without DEVID registers */ + switch (dir0_lsn) { + case 0xd: /* either a 486SLC or DLC w/o DEVID */ + dir0_msn = 0; + p = Cx486_name[(c->hard_math) ? 1 : 0]; + break; + + case 0xe: /* a 486S A step */ + dir0_msn = 0; + p = Cx486S_name[0]; + break; + } + break; + + default: /* unknown (shouldn't happen, we know everyone ;-) */ + dir0_msn = 7; + break; + } + strcpy(buf, Cx86_model[dir0_msn & 7]); + if (p) strcat(buf, p); + return; +} + +static void __init init_centaur(struct cpuinfo_x86 *c) +{ + enum { + ECX8=1<<1, + EIERRINT=1<<2, + DPM=1<<3, + DMCE=1<<4, + DSTPCLK=1<<5, + ELINEAR=1<<6, + DSMC=1<<7, + DTLOCK=1<<8, + EDCTLB=1<<8, + EMMX=1<<9, + DPDC=1<<11, + EBRPRED=1<<12, + DIC=1<<13, + DDC=1<<14, + DNA=1<<15, + ERETSTK=1<<16, + E2MMX=1<<19, + EAMD3D=1<<20, + }; + + char *name; + u32 fcr_set=0; + u32 fcr_clr=0; + u32 lo,hi,newlo; + u32 aa,bb,cc,dd; + + /* Bit 31 in normal CPUID used for nonstandard 3DNow ID; + 3DNow is IDd by bit 31 in extended CPUID (1*32+31) anyway */ + clear_bit(0*32+31, &c->x86_capability); + + switch (c->x86) { + + case 5: + switch(c->x86_model) { + case 4: + name="C6"; + fcr_set=ECX8|DSMC|EDCTLB|EMMX|ERETSTK; + fcr_clr=DPDC; + printk(KERN_NOTICE "Disabling bugged TSC.\n"); + clear_bit(X86_FEATURE_TSC, &c->x86_capability); + break; + case 8: + switch(c->x86_mask) { + default: + name="2"; + break; + case 7 ... 9: + name="2A"; + break; + case 10 ... 15: + name="2B"; + break; + } + fcr_set=ECX8|DSMC|DTLOCK|EMMX|EBRPRED|ERETSTK|E2MMX|EAMD3D; + fcr_clr=DPDC; + break; + case 9: + name="3"; + fcr_set=ECX8|DSMC|DTLOCK|EMMX|EBRPRED|ERETSTK|E2MMX|EAMD3D; + fcr_clr=DPDC; + break; + case 10: + name="4"; + /* no info on the WC4 yet */ + break; + default: + name="??"; + } + + /* get FCR */ + rdmsr(0x107, lo, hi); + + newlo=(lo|fcr_set) & (~fcr_clr); + + if (newlo!=lo) { + printk(KERN_INFO "Centaur FCR was 0x%X now 0x%X\n", lo, newlo ); + wrmsr(0x107, newlo, hi ); + } else { + printk(KERN_INFO "Centaur FCR is 0x%X\n",lo); + } + /* Emulate MTRRs using Centaur's MCR. */ + set_bit(X86_FEATURE_CENTAUR_MCR, &c->x86_capability); + /* Report CX8 */ + set_bit(X86_FEATURE_CX8, &c->x86_capability); + /* Set 3DNow! on Winchip 2 and above. */ + if (c->x86_model >=8) + set_bit(X86_FEATURE_3DNOW, &c->x86_capability); + /* See if we can find out some more. */ + if ( cpuid_eax(0x80000000) >= 0x80000005 ) { + /* Yes, we can. */ + cpuid(0x80000005,&aa,&bb,&cc,&dd); + /* Add L1 data and code cache sizes. 
*/ + c->x86_cache_size = (cc>>24)+(dd>>24); + } + sprintf( c->x86_model_id, "WinChip %s", name ); + break; + + case 6: + switch (c->x86_model) { + case 6 ... 7: /* Cyrix III or C3 */ + rdmsr (0x1107, lo, hi); + lo |= (1<<1 | 1<<7); /* Report CX8 & enable PGE */ + wrmsr (0x1107, lo, hi); + + set_bit(X86_FEATURE_CX8, &c->x86_capability); + set_bit(X86_FEATURE_3DNOW, &c->x86_capability); + + get_model_name(c); + display_cacheinfo(c); + break; + } + break; + } + +} + + +static void __init init_transmeta(struct cpuinfo_x86 *c) +{ + unsigned int cap_mask, uk, max, dummy; + unsigned int cms_rev1, cms_rev2; + unsigned int cpu_rev, cpu_freq, cpu_flags; + char cpu_info[65]; + + get_model_name(c); /* Same as AMD/Cyrix */ + display_cacheinfo(c); + + /* Print CMS and CPU revision */ + max = cpuid_eax(0x80860000); + if ( max >= 0x80860001 ) { + cpuid(0x80860001, &dummy, &cpu_rev, &cpu_freq, &cpu_flags); + printk(KERN_INFO "CPU: Processor revision %u.%u.%u.%u, %u MHz\n", + (cpu_rev >> 24) & 0xff, + (cpu_rev >> 16) & 0xff, + (cpu_rev >> 8) & 0xff, + cpu_rev & 0xff, + cpu_freq); + } + if ( max >= 0x80860002 ) { + cpuid(0x80860002, &dummy, &cms_rev1, &cms_rev2, &dummy); + printk(KERN_INFO "CPU: Code Morphing Software revision %u.%u.%u-%u-%u\n", + (cms_rev1 >> 24) & 0xff, + (cms_rev1 >> 16) & 0xff, + (cms_rev1 >> 8) & 0xff, + cms_rev1 & 0xff, + cms_rev2); + } + if ( max >= 0x80860006 ) { + cpuid(0x80860003, + (void *)&cpu_info[0], + (void *)&cpu_info[4], + (void *)&cpu_info[8], + (void *)&cpu_info[12]); + cpuid(0x80860004, + (void *)&cpu_info[16], + (void *)&cpu_info[20], + (void *)&cpu_info[24], + (void *)&cpu_info[28]); + cpuid(0x80860005, + (void *)&cpu_info[32], + (void *)&cpu_info[36], + (void *)&cpu_info[40], + (void *)&cpu_info[44]); + cpuid(0x80860006, + (void *)&cpu_info[48], + (void *)&cpu_info[52], + (void *)&cpu_info[56], + (void *)&cpu_info[60]); + cpu_info[64] = '\0'; + printk(KERN_INFO "CPU: %s\n", cpu_info); + } + + /* Unhide possibly hidden capability flags */ + rdmsr(0x80860004, cap_mask, uk); + wrmsr(0x80860004, ~0, uk); + c->x86_capability[0] = cpuid_edx(0x00000001); + wrmsr(0x80860004, cap_mask, uk); +} + + +static void __init init_rise(struct cpuinfo_x86 *c) +{ + printk("CPU: Rise iDragon"); + if (c->x86_model > 2) + printk(" II"); + printk("\n"); + printk("If you have one of these please email davej@suse.de\n"); + + /* Unhide possibly hidden capability flags + The mp6 iDragon family don't have MSRs. + We switch on extra features with this cpuid weirdness: */ + __asm__ ( + "movl $0x6363452a, %%eax\n\t" + "movl $0x3231206c, %%ecx\n\t" + "movl $0x2a32313a, %%edx\n\t" + "cpuid\n\t" + "movl $0x63634523, %%eax\n\t" + "movl $0x32315f6c, %%ecx\n\t" + "movl $0x2333313a, %%edx\n\t" + "cpuid\n\t" : : : "eax", "ebx", "ecx", "edx" + ); + set_bit(X86_FEATURE_CX8, &c->x86_capability); +} + +static void __init init_intel(struct cpuinfo_x86 *c) +{ + static int f00f_workaround_enabled; + extern void trap_init_f00f_bug(void); + extern void mcheck_init(struct cpuinfo_x86 *c); + char *p = NULL; + unsigned int l1i = 0, l1d = 0, l2 = 0, l3 = 0; /* Cache sizes */ + + /* + * All current models of Pentium and Pentium with MMX technology CPUs + * have the F0 0F bug, which lets nonpriviledged users lock up the system. + * Note that the workaround only should be initialized once... 
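+	 * The bug itself: the byte sequence F0 0F C7 C8 ("lock cmpxchg8b
+	 * %eax", an invalid encoding) locks up these CPUs instead of just
+	 * raising invalid-opcode.  trap_init_f00f_bug() works around it,
+	 * roughly, by aliasing the IDT through a write-protected fixmap
+	 * slot:
+	 *
+	 *	__set_fixmap(FIX_F00F_IDT, __pa(&idt_table), PAGE_KERNEL_RO);
+	 *	idt = (struct desc_struct *)fix_to_virt(FIX_F00F_IDT);
+	 *
+	 * so the errant locked IDT cycle takes a clean page fault that the
+	 * fault handler can sort out into the right trap.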
+ */ + c->f00f_bug = 0; + if ( c->x86 == 5 ) { + c->f00f_bug = 1; + if ( !f00f_workaround_enabled ) { + trap_init_f00f_bug(); + printk(KERN_NOTICE "Intel Pentium with F0 0F bug - workaround enabled.\n"); + f00f_workaround_enabled = 1; + } + } +#endif + + + if (c->cpuid_level > 1) { + /* supports eax=2 call */ + int i, j, n; + int regs[4]; + unsigned char *dp = (unsigned char *)regs; + + /* Number of times to iterate */ + n = cpuid_eax(2) & 0xFF; + + for ( i = 0 ; i < n ; i++ ) { + cpuid(2, ®s[0], ®s[1], ®s[2], ®s[3]); + + /* If bit 31 is set, this is an unknown format */ + for ( j = 0 ; j < 3 ; j++ ) { + if ( regs[j] < 0 ) regs[j] = 0; + } + + /* Byte 0 is level count, not a descriptor */ + for ( j = 1 ; j < 16 ; j++ ) { + unsigned char des = dp[j]; + unsigned char dl, dh; + unsigned int cs; + + dh = des >> 4; + dl = des & 0x0F; + + /* Black magic... */ + + switch ( dh ) + { + case 0: + switch ( dl ) { + case 6: + /* L1 I cache */ + l1i += 8; + break; + case 8: + /* L1 I cache */ + l1i += 16; + break; + case 10: + /* L1 D cache */ + l1d += 8; + break; + case 12: + /* L1 D cache */ + l1d += 16; + break; + default:; + /* TLB, or unknown */ + } + break; + case 2: + if ( dl ) { + /* L3 cache */ + cs = (dl-1) << 9; + l3 += cs; + } + break; + case 4: + if ( c->x86 > 6 && dl ) { + /* P4 family */ + /* L3 cache */ + cs = 128 << (dl-1); + l3 += cs; + break; + } + /* else same as 8 - fall through */ + case 8: + if ( dl ) { + /* L2 cache */ + cs = 128 << (dl-1); + l2 += cs; + } + break; + case 6: + if (dl > 5) { + /* L1 D cache */ + cs = 8<<(dl-6); + l1d += cs; + } + break; + case 7: + if ( dl >= 8 ) + { + /* L2 cache */ + cs = 64<<(dl-8); + l2 += cs; + } else { + /* L0 I cache, count as L1 */ + cs = dl ? (16 << (dl-1)) : 12; + l1i += cs; + } + break; + default: + /* TLB, or something else we don't know about */ + break; + } + } + } + if ( l1i || l1d ) + printk(KERN_INFO "CPU: L1 I cache: %dK, L1 D cache: %dK\n", + l1i, l1d); + if ( l2 ) + printk(KERN_INFO "CPU: L2 cache: %dK\n", l2); + if ( l3 ) + printk(KERN_INFO "CPU: L3 cache: %dK\n", l3); + + /* + * This assumes the L3 cache is shared; it typically lives in + * the northbridge. The L1 caches are included by the L2 + * cache, and so should not be included for the purpose of + * SMP switching weights. + */ + c->x86_cache_size = l2 ? l2 : (l1i+l1d); + } + + /* SEP CPUID bug: Pentium Pro reports SEP but doesn't have it */ + if ( c->x86 == 6 && c->x86_model < 3 && c->x86_mask < 3 ) + clear_bit(X86_FEATURE_SEP, &c->x86_capability); + + /* Names for the Pentium II/Celeron processors + detectable only by also checking the cache size. + Dixon is NOT a Celeron. 
 */
+	if (c->x86 == 6) {
+		switch (c->x86_model) {
+		case 5:
+			if (l2 == 0)
+				p = "Celeron (Covington)";
+			if (l2 == 256)
+				p = "Mobile Pentium II (Dixon)";
+			break;
+
+		case 6:
+			if (l2 == 128)
+				p = "Celeron (Mendocino)";
+			break;
+
+		case 8:
+			if (l2 == 128)
+				p = "Celeron (Coppermine)";
+			break;
+		}
+	}
+
+	if ( p )
+		strcpy(c->x86_model_id, p);
+
+	/* Enable MCA if available */
+	mcheck_init(c);
+}
+
+void __init get_cpu_vendor(struct cpuinfo_x86 *c)
+{
+	char *v = c->x86_vendor_id;
+
+	if (!strcmp(v, "GenuineIntel"))
+		c->x86_vendor = X86_VENDOR_INTEL;
+	else if (!strcmp(v, "AuthenticAMD"))
+		c->x86_vendor = X86_VENDOR_AMD;
+	else if (!strcmp(v, "CyrixInstead"))
+		c->x86_vendor = X86_VENDOR_CYRIX;
+	else if (!strcmp(v, "UMC UMC UMC "))
+		c->x86_vendor = X86_VENDOR_UMC;
+	else if (!strcmp(v, "CentaurHauls"))
+		c->x86_vendor = X86_VENDOR_CENTAUR;
+	else if (!strcmp(v, "NexGenDriven"))
+		c->x86_vendor = X86_VENDOR_NEXGEN;
+	else if (!strcmp(v, "RiseRiseRise"))
+		c->x86_vendor = X86_VENDOR_RISE;
+	else if (!strcmp(v, "GenuineTMx86") ||
+		 !strcmp(v, "TransmetaCPU"))
+		c->x86_vendor = X86_VENDOR_TRANSMETA;
+	else
+		c->x86_vendor = X86_VENDOR_UNKNOWN;
+}
+
+struct cpu_model_info {
+	int vendor;
+	int family;
+	char *model_names[16];
+};
+
+/* Naming convention should be: <Name> [(<Codename>)] */
+/* This table only is used unless init_<vendor>() below doesn't set it; */
+/* in particular, if CPUID levels 0x80000002..4 are supported, this isn't used */
+static struct cpu_model_info cpu_models[] __initdata = {
+	{ X86_VENDOR_INTEL,	4,
+	  { "486 DX-25/33", "486 DX-50", "486 SX", "486 DX/2", "486 SL",
+	    "486 SX/2", NULL, "486 DX/2-WB", "486 DX/4", "486 DX/4-WB", NULL,
+	    NULL, NULL, NULL, NULL, NULL }},
+	{ X86_VENDOR_INTEL,	5,
+	  { "Pentium 60/66 A-step", "Pentium 60/66", "Pentium 75 - 200",
+	    "OverDrive PODP5V83", "Pentium MMX", NULL, NULL,
+	    "Mobile Pentium 75 - 200", "Mobile Pentium MMX", NULL, NULL, NULL,
+	    NULL, NULL, NULL, NULL }},
+	{ X86_VENDOR_INTEL,	6,
+	  { "Pentium Pro A-step", "Pentium Pro", NULL, "Pentium II (Klamath)",
+	    NULL, "Pentium II (Deschutes)", "Mobile Pentium II",
+	    "Pentium III (Katmai)", "Pentium III (Coppermine)", NULL,
+	    "Pentium III (Cascades)", NULL, NULL, NULL, NULL }},
+	{ X86_VENDOR_AMD,	4,
+	  { NULL, NULL, NULL, "486 DX/2", NULL, NULL, NULL, "486 DX/2-WB",
+	    "486 DX/4", "486 DX/4-WB", NULL, NULL, NULL, NULL, "Am5x86-WT",
+	    "Am5x86-WB" }},
+	{ X86_VENDOR_AMD,	5, /* Is this really necessary?? */
+	  { "K5/SSA5", "K5",
+	    "K5", "K5", NULL, NULL,
+	    "K6", "K6", "K6-2",
+	    "K6-3", NULL, NULL, NULL, NULL, NULL, NULL }},
+	{ X86_VENDOR_AMD,	6, /* Is this really necessary?? */
+	  { "Athlon", "Athlon",
+	    "Athlon", NULL, "Athlon", NULL,
+	    NULL, NULL, NULL,
+	    NULL, NULL, NULL, NULL, NULL, NULL, NULL }},
+	{ X86_VENDOR_UMC,	4,
+	  { NULL, "U5D", "U5S", NULL, NULL, NULL, NULL, NULL, NULL, NULL,
+	    NULL, NULL, NULL, NULL, NULL, NULL }},
+	{ X86_VENDOR_NEXGEN,	5,
+	  { "Nx586", NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
+	    NULL, NULL, NULL, NULL, NULL, NULL, NULL }},
+	{ X86_VENDOR_RISE,	5,
+	  { "iDragon", NULL, "iDragon", NULL, NULL, NULL, NULL,
+	    NULL, "iDragon II", "iDragon II", NULL, NULL, NULL, NULL, NULL, NULL }},
+};
+
+/* Look up CPU names by table lookup.
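+   For example, X86_VENDOR_AMD, family 5, model 8 picks "K6-2" out of
+   the table above; CPUs that already filled x86_model_id via CPUID
+   levels 0x80000002..4 never reach this lookup.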
*/ +static char __init *table_lookup_model(struct cpuinfo_x86 *c) +{ + struct cpu_model_info *info = cpu_models; + int i; + + if ( c->x86_model >= 16 ) + return NULL; /* Range check */ + + for ( i = 0 ; i < sizeof(cpu_models)/sizeof(struct cpu_model_info) ; i++ ) { + if ( info->vendor == c->x86_vendor && + info->family == c->x86 ) { + return info->model_names[c->x86_model]; + } + info++; + } + return NULL; /* Not found */ +} + +/* + * Detect a NexGen CPU running without BIOS hypercode new enough + * to have CPUID. (Thanks to Herbert Oppmann) + */ + +static int __init deep_magic_nexgen_probe(void) +{ + int ret; + + __asm__ __volatile__ ( + " movw $0x5555, %%ax\n" + " xorw %%dx,%%dx\n" + " movw $2, %%cx\n" + " divw %%cx\n" + " movl $0, %%eax\n" + " jnz 1f\n" + " movl $1, %%eax\n" + "1:\n" + : "=a" (ret) : : "cx", "dx" ); + return ret; +} + +static void __init squash_the_stupid_serial_number(struct cpuinfo_x86 *c) +{ + if( test_bit(X86_FEATURE_PN, &c->x86_capability) && + disable_x86_serial_nr ) { + /* Disable processor serial number */ + unsigned long lo,hi; + rdmsr(0x119,lo,hi); + lo |= 0x200000; + wrmsr(0x119,lo,hi); + printk(KERN_NOTICE "CPU serial number disabled.\n"); + clear_bit(X86_FEATURE_PN, &c->x86_capability); + + /* Disabling the serial number may affect the cpuid level */ + c->cpuid_level = cpuid_eax(0); + } +} + + +int __init x86_serial_nr_setup(char *s) +{ + disable_x86_serial_nr = 0; + return 1; +} +__setup("serialnumber", x86_serial_nr_setup); + +int __init x86_fxsr_setup(char * s) +{ + disable_x86_fxsr = 1; + return 1; +} +__setup("nofxsr", x86_fxsr_setup); + + +/* Standard macro to see if a specific flag is changeable */ +static inline int flag_is_changeable_p(u32 flag) +{ + u32 f1, f2; + + asm("pushfl\n\t" + "pushfl\n\t" + "popl %0\n\t" + "movl %0,%1\n\t" + "xorl %2,%0\n\t" + "pushl %0\n\t" + "popfl\n\t" + "pushfl\n\t" + "popl %0\n\t" + "popfl\n\t" + : "=&r" (f1), "=&r" (f2) + : "ir" (flag)); + + return ((f1^f2) & flag) != 0; +} + + +/* Probe for the CPUID instruction */ +static int __init have_cpuid_p(void) +{ + return flag_is_changeable_p(X86_EFLAGS_ID); +} + +/* + * Cyrix CPUs without cpuid or with cpuid not yet enabled can be detected + * by the fact that they preserve the flags across the division of 5/2. + * PII and PPro exhibit this behavior too, but they have cpuid available. + */ + +/* + * Perform the Cyrix 5/2 test. A Cyrix won't change + * the flags, while other 486 chips will. + */ +static inline int test_cyrix_52div(void) +{ + unsigned int test; + + __asm__ __volatile__( + "sahf\n\t" /* clear flags (%eax = 0x0005) */ + "div %b2\n\t" /* divide 5 by 2 */ + "lahf" /* store flags into %ah */ + : "=a" (test) + : "0" (5), "q" (2) + : "cc"); + + /* AH is 0x02 on Cyrix after the divide.. */ + return (unsigned char) (test >> 8) == 0x02; +} + +/* Try to detect a CPU with disabled CPUID, and if so, enable. This routine + may also be used to detect non-CPUID processors and fill in some of + the information manually. 
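+   (Why 0x02: the "0" (5) constraint preloads AX = 0x0005, so SAHF
+   loads the flags from AH = 0, leaving only the always-set bit 1 of
+   EFLAGS.  The divide then alters the flags on every non-Cyrix 486,
+   so LAHF reads back exactly 0x02 only when a Cyrix left them
+   untouched.)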
*/ +static int __init id_and_try_enable_cpuid(struct cpuinfo_x86 *c) +{ + /* First of all, decide if this is a 486 or higher */ + /* It's a 486 if we can modify the AC flag */ + if ( flag_is_changeable_p(X86_EFLAGS_AC) ) + c->x86 = 4; + else + c->x86 = 3; + + /* Detect Cyrix with disabled CPUID */ + if ( c->x86 == 4 && test_cyrix_52div() ) { + unsigned char dir0, dir1; + + strcpy(c->x86_vendor_id, "CyrixInstead"); + c->x86_vendor = X86_VENDOR_CYRIX; + + /* Actually enable cpuid on the older cyrix */ + + /* Retrieve CPU revisions */ + + do_cyrix_devid(&dir0, &dir1); + + dir0>>=4; + + /* Check it is an affected model */ + + if (dir0 == 5 || dir0 == 3) + { + unsigned char ccr3, ccr4; + unsigned long flags; + + printk(KERN_INFO "Enabling CPUID on Cyrix processor.\n"); + local_irq_save(flags); + ccr3 = getCx86(CX86_CCR3); + setCx86(CX86_CCR3, (ccr3 & 0x0f) | 0x10); /* enable MAPEN */ + ccr4 = getCx86(CX86_CCR4); + setCx86(CX86_CCR4, ccr4 | 0x80); /* enable cpuid */ + setCx86(CX86_CCR3, ccr3); /* disable MAPEN */ + local_irq_restore(flags); + } + } else + + /* Detect NexGen with old hypercode */ + if ( deep_magic_nexgen_probe() ) { + strcpy(c->x86_vendor_id, "NexGenDriven"); + } + + return have_cpuid_p(); /* Check to see if CPUID now enabled? */ +} + +/* + * This does the hard work of actually picking apart the CPU stuff... + */ +void __init identify_cpu(struct cpuinfo_x86 *c) +{ + int junk, i; + u32 xlvl, tfms; + + c->loops_per_jiffy = loops_per_jiffy; + c->x86_cache_size = -1; + c->x86_vendor = X86_VENDOR_UNKNOWN; + c->cpuid_level = -1; /* CPUID not detected */ + c->x86_model = c->x86_mask = 0; /* So far unknown... */ + c->x86_vendor_id[0] = '\0'; /* Unset */ + c->x86_model_id[0] = '\0'; /* Unset */ + memset(&c->x86_capability, 0, sizeof c->x86_capability); + + if ( !have_cpuid_p() && !id_and_try_enable_cpuid(c) ) { + /* CPU doesn't have CPUID */ + + /* If there are any capabilities, they're vendor-specific */ + /* enable_cpuid() would have set c->x86 for us. */ + } else { + /* CPU does have CPUID */ + + /* Get vendor name */ + cpuid(0x00000000, &c->cpuid_level, + (int *)&c->x86_vendor_id[0], + (int *)&c->x86_vendor_id[8], + (int *)&c->x86_vendor_id[4]); + + get_cpu_vendor(c); + /* Initialize the standard set of capabilities */ + /* Note that the vendor-specific code below might override */ + + /* Intel-defined flags: level 0x00000001 */ + if ( c->cpuid_level >= 0x00000001 ) { + cpuid(0x00000001, &tfms, &junk, &junk, + &c->x86_capability[0]); + c->x86 = (tfms >> 8) & 15; + c->x86_model = (tfms >> 4) & 15; + c->x86_mask = tfms & 15; + } else { + /* Have CPUID level 0 only - unheard of */ + c->x86 = 4; + } + + /* AMD-defined flags: level 0x80000001 */ + xlvl = cpuid_eax(0x80000000); + if ( (xlvl & 0xffff0000) == 0x80000000 ) { + if ( xlvl >= 0x80000001 ) + c->x86_capability[1] = cpuid_edx(0x80000001); + if ( xlvl >= 0x80000004 ) + get_model_name(c); /* Default name */ + } + + /* Transmeta-defined flags: level 0x80860001 */ + xlvl = cpuid_eax(0x80860000); + if ( (xlvl & 0xffff0000) == 0x80860000 ) { + if ( xlvl >= 0x80860001 ) + c->x86_capability[2] = cpuid_edx(0x80860001); + } + } + + printk(KERN_DEBUG "CPU: Before vendor init, caps: %08x %08x %08x, vendor = %d\n", + c->x86_capability[0], + c->x86_capability[1], + c->x86_capability[2], + c->x86_vendor); + + /* + * Vendor-specific initialization. 
In this section we + * canonicalize the feature flags, meaning if there are + * features a certain CPU supports which CPUID doesn't + * tell us, CPUID claiming incorrect flags, or other bugs, + * we handle them here. + * + * At the end of this section, c->x86_capability better + * indicate the features this CPU genuinely supports! + */ + switch ( c->x86_vendor ) { + case X86_VENDOR_UNKNOWN: + default: + /* Not much we can do here... */ + /* Check if at least it has cpuid */ + if (c->cpuid_level == -1) + { + /* No cpuid. It must be an ancient CPU */ + if (c->x86 == 4) + strcpy(c->x86_model_id, "486"); + else if (c->x86 == 3) + strcpy(c->x86_model_id, "386"); + } + break; + + case X86_VENDOR_CYRIX: + init_cyrix(c); + break; + + case X86_VENDOR_AMD: + init_amd(c); + break; + + case X86_VENDOR_CENTAUR: + init_centaur(c); + break; + + case X86_VENDOR_INTEL: + init_intel(c); + break; + + case X86_VENDOR_NEXGEN: + c->x86_cache_size = 256; /* A few had 1 MB... */ + break; + + case X86_VENDOR_TRANSMETA: + init_transmeta(c); + break; + + case X86_VENDOR_RISE: + init_rise(c); + break; + } + + printk(KERN_DEBUG "CPU: After vendor init, caps: %08x %08x %08x %08x\n", + c->x86_capability[0], + c->x86_capability[1], + c->x86_capability[2], + c->x86_capability[3]); + + /* + * The vendor-specific functions might have changed features. Now + * we do "generic changes." + */ + + /* TSC disabled? */ +#ifndef CONFIG_X86_TSC + if ( tsc_disable ) + clear_bit(X86_FEATURE_TSC, &c->x86_capability); +#endif + + /* FXSR disabled? */ + if (disable_x86_fxsr) { + clear_bit(X86_FEATURE_FXSR, &c->x86_capability); + clear_bit(X86_FEATURE_XMM, &c->x86_capability); + } + + /* Disable the PN if appropriate */ + squash_the_stupid_serial_number(c); + + /* If the model name is still unset, do table lookup. */ + if ( !c->x86_model_id[0] ) { + char *p; + p = table_lookup_model(c); + if ( p ) + strcpy(c->x86_model_id, p); + else + /* Last resort... */ + sprintf(c->x86_model_id, "%02x/%02x", + c->x86_vendor, c->x86_model); + } + + /* Now the feature flags better reflect actual CPU features! */ + + printk(KERN_DEBUG "CPU: After generic, caps: %08x %08x %08x %08x\n", + c->x86_capability[0], + c->x86_capability[1], + c->x86_capability[2], + c->x86_capability[3]); + + /* + * On SMP, boot_cpu_data holds the common feature set between + * all CPUs; so make sure that we indicate which features are + * common between the CPUs. The first time this routine gets + * executed, c == &boot_cpu_data. + */ + if ( c != &boot_cpu_data ) { + /* AND the already accumulated flags with these */ + for ( i = 0 ; i < NCAPINTS ; i++ ) + boot_cpu_data.x86_capability[i] &= c->x86_capability[i]; + } + + printk(KERN_DEBUG "CPU: Common caps: %08x %08x %08x %08x\n", + boot_cpu_data.x86_capability[0], + boot_cpu_data.x86_capability[1], + boot_cpu_data.x86_capability[2], + boot_cpu_data.x86_capability[3]); +} +/* + * Perform early boot up checks for a valid TSC. 
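+ * The calibration code there must not trust a TSC that vendor setup
+ * is about to disable: init_cyrix() clears X86_FEATURE_TSC on the
+ * MediaGX line, so the vendor is probed (and Cyrix set up) this early.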
+ * See arch/i386/kernel/time.c
+ */
+
+void __init dodgy_tsc(void)
+{
+	get_cpu_vendor(&boot_cpu_data);
+
+	if ( boot_cpu_data.x86_vendor == X86_VENDOR_CYRIX )
+		init_cyrix(&boot_cpu_data);
+}
+
+
+/* These need to match <asm/processor.h> */
+static char *cpu_vendor_names[] __initdata = {
+	"Intel", "Cyrix", "AMD", "UMC", "NexGen", "Centaur", "Rise", "Transmeta" };
+
+
+void __init print_cpu_info(struct cpuinfo_x86 *c)
+{
+	char *vendor = NULL;
+
+	if (c->x86_vendor < sizeof(cpu_vendor_names)/sizeof(char *))
+		vendor = cpu_vendor_names[c->x86_vendor];
+	else if (c->cpuid_level >= 0)
+		vendor = c->x86_vendor_id;
+
+	if (vendor && strncmp(c->x86_model_id, vendor, strlen(vendor)))
+		printk("%s ", vendor);
+
+	if (!c->x86_model_id[0])
+		printk("%d86", c->x86);
+	else
+		printk("%s", c->x86_model_id);
+
+	if (c->x86_mask || c->cpuid_level >= 0)
+		printk(" stepping %02x\n", c->x86_mask);
+	else
+		printk("\n");
+}
+
+/*
+ *	Get CPU information for use by the procfs.
+ */
+
+int get_cpuinfo(char * buffer)
+{
+	char *p = buffer;
+
+	/*
+	 * These flag bits must match the definitions in <asm/cpufeature.h>.
+	 * NULL means this bit is undefined or reserved; either way it doesn't
+	 * have meaning as far as Linux is concerned.  Note that it's important
+	 * to realize there is a difference between this table and CPUID -- if
+	 * applications want to get the raw CPUID data, they should access
+	 * /dev/cpu/<cpu_nr>/cpuid instead.
+	 */
+	static char *x86_cap_flags[] = {
+		/* Intel-defined */
+		"fpu", "vme", "de", "pse", "tsc", "msr", "pae", "mce",
+		"cx8", "apic", NULL, "sep", "mtrr", "pge", "mca", "cmov",
+		"pat", "pse36", "pn", "clflush", NULL, "dts", "acpi", "mmx",
+		"fxsr", "sse", "sse2", "ss", NULL, "tm", "ia64", NULL,
+
+		/* AMD-defined */
+		NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
+		NULL, NULL, NULL, "syscall", NULL, NULL, NULL, NULL,
+		NULL, NULL, NULL, NULL, NULL, NULL, "mmxext", NULL,
+		NULL, NULL, NULL, NULL, NULL, "lm", "3dnowext", "3dnow",
+
+		/* Transmeta-defined */
+		"recovery", "longrun", NULL, "lrti", NULL, NULL, NULL, NULL,
+		NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
+		NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
+		NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
+
+		/* Other (Linux-defined) */
+		"cxmmx", "k6_mtrr", "cyrix_arr", "centaur_mcr", NULL, NULL, NULL, NULL,
+		NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
+		NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
+		NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
+	};
+	struct cpuinfo_x86 *c = cpu_data;
+	int i, n;
+
+	for (n = 0; n < NR_CPUS; n++, c++) {
+		int fpu_exception;
+#ifdef CONFIG_SMP
+		if (!(cpu_online_map & (1<<n)))
+			continue;
+#endif
+		p += sprintf(p, "processor\t: %d\n"
+			"vendor_id\t: %s\n"
+			"cpu family\t: %d\n"
+			"model\t\t: %d\n"
+			"model name\t: %s\n",
+			n,
+			c->x86_vendor_id[0] ? c->x86_vendor_id : "unknown",
+			c->x86,
+			c->x86_model,
+			c->x86_model_id[0] ? c->x86_model_id : "unknown");
+
+		if (c->x86_mask || c->cpuid_level >= 0)
+			p += sprintf(p, "stepping\t: %d\n", c->x86_mask);
+		else
+			p += sprintf(p, "stepping\t: unknown\n");
+
+		if ( test_bit(X86_FEATURE_TSC, &c->x86_capability) ) {
+			p += sprintf(p, "cpu MHz\t\t: %lu.%03lu\n",
+				cpu_khz / 1000, (cpu_khz % 1000));
+		}
+
+		/* Cache size */
+		if (c->x86_cache_size >= 0)
+			p += sprintf(p, "cache size\t: %d KB\n", c->x86_cache_size);
+
+		/* We use exception 16 if we have hardware math and we've either seen it or the CPU claims it is internal */
+		fpu_exception = c->hard_math && (ignore_irq13 || cpu_has_fpu);
+		p += sprintf(p, "fdiv_bug\t: %s\n"
+			        "hlt_bug\t\t: %s\n"
+			        "f00f_bug\t: %s\n"
+			        "coma_bug\t: %s\n"
+			        "fpu\t\t: %s\n"
+			        "fpu_exception\t: %s\n"
+			        "cpuid level\t: %d\n"
+			        "wp\t\t: %s\n"
+			        "flags\t\t:",
+			     c->fdiv_bug ? "yes" : "no",
+			     c->hlt_works_ok ?
"no" : "yes", + c->f00f_bug ? "yes" : "no", + c->coma_bug ? "yes" : "no", + c->hard_math ? "yes" : "no", + fpu_exception ? "yes" : "no", + c->cpuid_level, + c->wp_works_ok ? "yes" : "no"); + + for ( i = 0 ; i < 32*NCAPINTS ; i++ ) + if ( test_bit(i, &c->x86_capability) && + x86_cap_flags[i] != NULL ) + p += sprintf(p, " %s", x86_cap_flags[i]); + + p += sprintf(p, "\nbogomips\t: %lu.%02lu\n\n", + c->loops_per_jiffy/(500000/HZ), + (c->loops_per_jiffy/(5000/HZ)) % 100); + } + return p - buffer; +} + +static unsigned long cpu_initialized __initdata = 0; + +/* + * cpu_init() initializes state that is per-CPU. Some data is already + * initialized (naturally) in the bootstrap process, such as the GDT + * and IDT. We reload them nevertheless, this function acts as a + * 'CPU state barrier', nothing should get across. + */ +void __init cpu_init (void) +{ + int nr = smp_processor_id(); + struct tss_struct * t = &init_tss[nr]; + + if (test_and_set_bit(nr, &cpu_initialized)) { + printk(KERN_WARNING "CPU#%d already initialized!\n", nr); + for (;;) __sti(); + } + printk(KERN_INFO "Initializing CPU#%d\n", nr); + + if (cpu_has_vme || cpu_has_tsc || cpu_has_de) + clear_in_cr4(X86_CR4_VME|X86_CR4_PVI|X86_CR4_TSD|X86_CR4_DE); +#ifndef CONFIG_X86_TSC + if (tsc_disable && cpu_has_tsc) { + printk(KERN_NOTICE "Disabling TSC...\n"); + /**** FIX-HPA: DOES THIS REALLY BELONG HERE? ****/ + clear_bit(X86_FEATURE_TSC, boot_cpu_data.x86_capability); + set_in_cr4(X86_CR4_TSD); + } +#endif + + __asm__ __volatile__("lgdt %0": "=m" (gdt_descr)); + __asm__ __volatile__("lidt %0": "=m" (idt_descr)); + + /* + * Delete NT + */ + __asm__("pushfl ; andl $0xffffbfff,(%esp) ; popfl"); + + /* + * set up and load the per-CPU TSS and LDT + */ + atomic_inc(&init_mm.mm_count); + current->active_mm = &init_mm; + if(current->mm) + BUG(); + enter_lazy_tlb(&init_mm, current, nr); + + t->esp0 = current->thread.esp0; + set_tss_desc(nr,t); + gdt_table[__TSS(nr)].b &= 0xfffffdff; + load_TR(nr); + load_LDT(&init_mm); + + /* + * Clear all 6 debug registers: + */ + +#define CD(register) __asm__("movl %0,%%db" #register ::"r"(0) ); + + CD(0); CD(1); CD(2); CD(3); /* no db4 and db5 */; CD(6); CD(7); + +#undef CD + + /* + * Force FPU initialization: + */ + current->flags &= ~PF_USEDFPU; + current->used_math = 0; + stts(); +} + +/* + * Local Variables: + * mode:c + * c-file-style:"k&r" + * c-basic-offset:8 + * End: + */ diff -urpN linux-2.4.9-linus/arch/i386/kernel/smp.c linux-2.4.9-larpage/arch/i386/kernel/smp.c --- linux-2.4.9-linus/arch/i386/kernel/smp.c 2001-08-12 10:38:47.000000000 -0700 +++ linux-2.4.9-larpage/arch/i386/kernel/smp.c 2002-11-20 02:02:21.000000000 -0800 @@ -212,9 +212,8 @@ static inline void send_IPI_mask(int mas static volatile unsigned long flush_cpumask; static struct mm_struct * flush_mm; -static unsigned long flush_va; +static unsigned long flush_start, flush_end; static spinlock_t tlbstate_lock = SPIN_LOCK_UNLOCKED; -#define FLUSH_ALL 0xffffffff /* * We cannot call mmdrop() because we are in interrupt context, @@ -287,14 +286,13 @@ asmlinkage void smp_invalidate_interrupt * * BUG(); */ - - if (flush_mm == cpu_tlbstate[cpu].active_mm) { - if (cpu_tlbstate[cpu].state == TLBSTATE_OK) { - if (flush_va == FLUSH_ALL) - local_flush_tlb(); - else - __flush_tlb_one(flush_va); - } else + + if (flush_mm == NULL) + __flush_tlb_ones(flush_start, flush_end); + else if (flush_mm == cpu_tlbstate[cpu].active_mm) { + if (cpu_tlbstate[cpu].state == TLBSTATE_OK) + __flush_tlb_range(flush_start, flush_end); + else leave_mm(cpu); } 
ack_APIC_irq(); @@ -302,34 +300,20 @@ asmlinkage void smp_invalidate_interrupt } static void flush_tlb_others (unsigned long cpumask, struct mm_struct *mm, - unsigned long va) + unsigned long start, unsigned long end) { /* - * A couple of (to be removed) sanity checks: - * - * - we do not send IPIs to not-yet booted CPUs. - * - current CPU must not be in mask - * - mask must exist :) - */ - if (!cpumask) - BUG(); - if ((cpumask & cpu_online_map) != cpumask) - BUG(); - if (cpumask & (1 << smp_processor_id())) - BUG(); - if (!mm) - BUG(); - - /* * i'm not happy about this global shared spinlock in the * MM hot path, but we'll see how contended it is. * Temporarily this turns IRQs off, so that lockups are * detected by the NMI watchdog. */ spin_lock(&tlbstate_lock); - + flush_mm = mm; - flush_va = va; + flush_start = start; + flush_end = end; + atomic_set_mask(cpumask, &flush_cpumask); /* * We have to send the IPI only to @@ -340,19 +324,17 @@ static void flush_tlb_others (unsigned l while (flush_cpumask) /* nothing. lockup detection does not belong here */; - flush_mm = NULL; - flush_va = 0; spin_unlock(&tlbstate_lock); } - -void flush_tlb_current_task(void) + +void flush_tlb(void) { struct mm_struct *mm = current->mm; unsigned long cpu_mask = mm->cpu_vm_mask & ~(1 << smp_processor_id()); local_flush_tlb(); if (cpu_mask) - flush_tlb_others(cpu_mask, mm, FLUSH_ALL); + flush_tlb_others(cpu_mask, mm, 0, PAGE_OFFSET); } void flush_tlb_mm (struct mm_struct * mm) @@ -366,7 +348,7 @@ void flush_tlb_mm (struct mm_struct * mm leave_mm(smp_processor_id()); } if (cpu_mask) - flush_tlb_others(cpu_mask, mm, FLUSH_ALL); + flush_tlb_others(cpu_mask, mm, 0, PAGE_OFFSET); } void flush_tlb_page(struct vm_area_struct * vma, unsigned long va) @@ -375,14 +357,27 @@ void flush_tlb_page(struct vm_area_struc unsigned long cpu_mask = mm->cpu_vm_mask & ~(1 << smp_processor_id()); if (current->active_mm == mm) { - if(current->mm) + if (current->mm) __flush_tlb_one(va); - else + else leave_mm(smp_processor_id()); } + if (cpu_mask) + flush_tlb_others(cpu_mask, mm, va, va + 1); +} + +void flush_tlb_range(struct mm_struct *mm, unsigned long start, unsigned long end) +{ + unsigned long cpu_mask = mm->cpu_vm_mask & ~(1 << smp_processor_id()); + if (current->active_mm == mm) { + if (current->mm) + __flush_tlb_range(start, end); + else + leave_mm(smp_processor_id()); + } if (cpu_mask) - flush_tlb_others(cpu_mask, mm, va); + flush_tlb_others(cpu_mask, mm, start, end); } static inline void do_flush_tlb_all_local(void) @@ -406,6 +401,22 @@ void flush_tlb_all(void) do_flush_tlb_all_local(); } +void flush_tlb_range_k(unsigned long start, unsigned long end) +{ + unsigned long cpu_mask; + + if (end - start <= MAX_FLUSH_TLB_RANGE) { + end = MMUPAGE_ALIGN(end); + cpu_mask = cpu_online_map & ~(1 << smp_processor_id()); + __flush_tlb_ones(start, end); + if (cpu_mask) + flush_tlb_others(cpu_mask, NULL, start, end); + } else { + smp_call_function(flush_tlb_all_ipi, 0, 1, 1); + do_flush_tlb_all_local(); + } +} + /* * this function sends a 'reschedule' IPI to another CPU. * it goes straight through and wastes no time serializing diff -urpN linux-2.4.9-linus/arch/i386/kernel/smp.c.orig linux-2.4.9-larpage/arch/i386/kernel/smp.c.orig --- linux-2.4.9-linus/arch/i386/kernel/smp.c.orig 1969-12-31 16:00:00.000000000 -0800 +++ linux-2.4.9-larpage/arch/i386/kernel/smp.c.orig 2002-11-20 02:02:21.000000000 -0800 @@ -0,0 +1,537 @@ +/* + * Intel SMP support routines. 
+ * + * (c) 1995 Alan Cox, Building #3 + * (c) 1998-99, 2000 Ingo Molnar + * + * This code is released under the GNU General Public License version 2 or + * later. + */ + +#include + +#include +#include +#include +#include +#include +#include +#include + +#include +#include + +/* + * Some notes on x86 processor bugs affecting SMP operation: + * + * Pentium, Pentium Pro, II, III (and all CPUs) have bugs. + * The Linux implications for SMP are handled as follows: + * + * Pentium III / [Xeon] + * None of the E1AP-E3AP errata are visible to the user. + * + * E1AP. see PII A1AP + * E2AP. see PII A2AP + * E3AP. see PII A3AP + * + * Pentium II / [Xeon] + * None of the A1AP-A3AP errata are visible to the user. + * + * A1AP. see PPro 1AP + * A2AP. see PPro 2AP + * A3AP. see PPro 7AP + * + * Pentium Pro + * None of 1AP-9AP errata are visible to the normal user, + * except occasional delivery of 'spurious interrupt' as trap #15. + * This is very rare and a non-problem. + * + * 1AP. Linux maps APIC as non-cacheable + * 2AP. worked around in hardware + * 3AP. fixed in C0 and above steppings microcode update. + * Linux does not use excessive STARTUP_IPIs. + * 4AP. worked around in hardware + * 5AP. symmetric IO mode (normal Linux operation) not affected. + * 'noapic' mode has vector 0xf filled out properly. + * 6AP. 'noapic' mode might be affected - fixed in later steppings + * 7AP. We do not assume writes to the LVT deassering IRQs + * 8AP. We do not enable low power mode (deep sleep) during MP bootup + * 9AP. We do not use mixed mode + * + * Pentium + * There is a marginal case where REP MOVS on 100MHz SMP + * machines with B stepping processors can fail. XXX should provide + * an L1cache=Writethrough or L1cache=off option. + * + * B stepping CPUs may hang. There are hardware work arounds + * for this. We warn about it in case your board doesnt have the work + * arounds. Basically thats so I can tell anyone with a B stepping + * CPU and SMP problems "tough". + * + * Specific items [From Pentium Processor Specification Update] + * + * 1AP. Linux doesn't use remote read + * 2AP. Linux doesn't trust APIC errors + * 3AP. We work around this + * 4AP. Linux never generated 3 interrupts of the same priority + * to cause a lost local interrupt. + * 5AP. Remote read is never used + * 6AP. not affected - worked around in hardware + * 7AP. not affected - worked around in hardware + * 8AP. worked around in hardware - we get explicit CS errors if not + * 9AP. only 'noapic' mode affected. Might generate spurious + * interrupts, we log only the first one and count the + * rest silently. + * 10AP. not affected - worked around in hardware + * 11AP. Linux reads the APIC between writes to avoid this, as per + * the documentation. Make sure you preserve this as it affects + * the C stepping chips too. + * 12AP. not affected - worked around in hardware + * 13AP. not affected - worked around in hardware + * 14AP. we always deassert INIT during bootup + * 15AP. not affected - worked around in hardware + * 16AP. not affected - worked around in hardware + * 17AP. not affected - worked around in hardware + * 18AP. not affected - worked around in hardware + * 19AP. not affected - worked around in BIOS + * + * If this sounds worrying believe me these bugs are either ___RARE___, + * or are signal timing bugs worked around in hardware and there's + * about nothing of note with C stepping upwards. + */ + +/* The 'big kernel lock' */ +spinlock_t kernel_flag = SPIN_LOCK_UNLOCKED; + +struct tlb_state cpu_tlbstate[NR_CPUS] = {[0 ... 
NR_CPUS-1] = { &init_mm, 0 }}; + +/* + * the following functions deal with sending IPIs between CPUs. + * + * We use 'broadcast', CPU->CPU IPIs and self-IPIs too. + */ + +static inline int __prepare_ICR (unsigned int shortcut, int vector) +{ + return APIC_DM_FIXED | shortcut | vector | APIC_DEST_LOGICAL; +} + +static inline int __prepare_ICR2 (unsigned int mask) +{ + return SET_APIC_DEST_FIELD(mask); +} + +static inline void __send_IPI_shortcut(unsigned int shortcut, int vector) +{ + /* + * Subtle. In the case of the 'never do double writes' workaround + * we have to lock out interrupts to be safe. As we don't care + * of the value read we use an atomic rmw access to avoid costly + * cli/sti. Otherwise we use an even cheaper single atomic write + * to the APIC. + */ + unsigned int cfg; + + /* + * Wait for idle. + */ + apic_wait_icr_idle(); + + /* + * No need to touch the target chip field + */ + cfg = __prepare_ICR(shortcut, vector); + + /* + * Send the IPI. The write to APIC_ICR fires this off. + */ + apic_write_around(APIC_ICR, cfg); +} + +static inline void send_IPI_allbutself(int vector) +{ + /* + * if there are no other CPUs in the system then + * we get an APIC send error if we try to broadcast. + * thus we have to avoid sending IPIs in this case. + */ + if (smp_num_cpus > 1) + __send_IPI_shortcut(APIC_DEST_ALLBUT, vector); +} + +static inline void send_IPI_all(int vector) +{ + __send_IPI_shortcut(APIC_DEST_ALLINC, vector); +} + +void send_IPI_self(int vector) +{ + __send_IPI_shortcut(APIC_DEST_SELF, vector); +} + +static inline void send_IPI_mask(int mask, int vector) +{ + unsigned long cfg; + unsigned long flags; + + __save_flags(flags); + __cli(); + + /* + * Wait for idle. + */ + apic_wait_icr_idle(); + + /* + * prepare target chip field + */ + cfg = __prepare_ICR2(mask); + apic_write_around(APIC_ICR2, cfg); + + /* + * program the ICR + */ + cfg = __prepare_ICR(0, vector); + + /* + * Send the IPI. The write to APIC_ICR fires this off. + */ + apic_write_around(APIC_ICR, cfg); + __restore_flags(flags); +} + +/* + * Smarter SMP flushing macros. + * c/o Linus Torvalds. + * + * These mean you can really definitely utterly forget about + * writing to user space from interrupts. (Its not allowed anyway). + * + * Optimizations Manfred Spraul + */ + +static volatile unsigned long flush_cpumask; +static struct mm_struct * flush_mm; +static unsigned long flush_start, flush_end; +static spinlock_t tlbstate_lock = SPIN_LOCK_UNLOCKED; + +/* + * We cannot call mmdrop() because we are in interrupt context, + * instead update mm->cpu_vm_mask. + */ +static void inline leave_mm (unsigned long cpu) +{ + if (cpu_tlbstate[cpu].state == TLBSTATE_OK) + BUG(); + clear_bit(cpu, &cpu_tlbstate[cpu].active_mm->cpu_vm_mask); +} + +/* + * + * The flush IPI assumes that a thread switch happens in this order: + * [cpu0: the cpu that switches] + * 1) switch_mm() either 1a) or 1b) + * 1a) thread switch to a different mm + * 1a1) clear_bit(cpu, &old_mm->cpu_vm_mask); + * Stop ipi delivery for the old mm. This is not synchronized with + * the other cpus, but smp_invalidate_interrupt ignore flush ipis + * for the wrong mm, and in the worst case we perform a superflous + * tlb flush. + * 1a2) set cpu_tlbstate to TLBSTATE_OK + * Now the smp_invalidate_interrupt won't call leave_mm if cpu0 + * was in lazy tlb mode. + * 1a3) update cpu_tlbstate[].active_mm + * Now cpu0 accepts tlb flushes for the new mm. + * 1a4) set_bit(cpu, &new_mm->cpu_vm_mask); + * Now the other cpus will send tlb flush ipis. + * 1a4) change cr3. 
+ * 1b) thread switch without mm change + * cpu_tlbstate[].active_mm is correct, cpu0 already handles + * flush ipis. + * 1b1) set cpu_tlbstate to TLBSTATE_OK + * 1b2) test_and_set the cpu bit in cpu_vm_mask. + * Atomically set the bit [other cpus will start sending flush ipis], + * and test the bit. + * 1b3) if the bit was 0: leave_mm was called, flush the tlb. + * 2) switch %%esp, ie current + * + * The interrupt must handle 2 special cases: + * - cr3 is changed before %%esp, ie. it cannot use current->{active_,}mm. + * - the cpu performs speculative tlb reads, i.e. even if the cpu only + * runs in kernel space, the cpu could load tlb entries for user space + * pages. + * + * The good news is that cpu_tlbstate is local to each cpu, no + * write/read ordering problems. + */ + +/* + * TLB flush IPI: + * + * 1) Flush the tlb entries if the cpu uses the mm that's being flushed. + * 2) Leave the mm if we are in the lazy tlb mode. + */ + +asmlinkage void smp_invalidate_interrupt (void) +{ + unsigned long cpu = smp_processor_id(); + + if (!test_bit(cpu, &flush_cpumask)) + return; + /* + * This was a BUG() but until someone can quote me the + * line from the intel manual that guarantees an IPI to + * multiple CPUs is retried _only_ on the erroring CPUs + * its staying as a return + * + * BUG(); + */ + + if (flush_mm == NULL) + __flush_tlb_ones(flush_start, flush_end); + else if (flush_mm == cpu_tlbstate[cpu].active_mm) { + if (cpu_tlbstate[cpu].state == TLBSTATE_OK) + __flush_tlb_range(flush_start, flush_end); + else + leave_mm(cpu); + } + ack_APIC_irq(); + clear_bit(cpu, &flush_cpumask); +} + +static void flush_tlb_others (unsigned long cpumask, struct mm_struct *mm, + unsigned long start, unsigned long end) +{ + /* + * i'm not happy about this global shared spinlock in the + * MM hot path, but we'll see how contended it is. + * Temporarily this turns IRQs off, so that lockups are + * detected by the NMI watchdog. + */ + spin_lock(&tlbstate_lock); + + flush_mm = mm; + flush_start = start; + flush_end = end; + + atomic_set_mask(cpumask, &flush_cpumask); + /* + * We have to send the IPI only to + * CPUs affected. + */ + send_IPI_mask(cpumask, INVALIDATE_TLB_VECTOR); + + while (flush_cpumask) + /* nothing. 
lockup detection does not belong here */;
+
+	spin_unlock(&tlbstate_lock);
+}
+
+void flush_tlb(void)
+{
+	struct mm_struct *mm = current->mm;
+	unsigned long cpu_mask = mm->cpu_vm_mask & ~(1 << smp_processor_id());
+
+	local_flush_tlb();
+	if (cpu_mask)
+		flush_tlb_others(cpu_mask, mm, 0, PAGE_OFFSET);
+}
+
+void flush_tlb_mm (struct mm_struct * mm)
+{
+	unsigned long cpu_mask = mm->cpu_vm_mask & ~(1 << smp_processor_id());
+
+	if (current->active_mm == mm) {
+		if (current->mm)
+			local_flush_tlb();
+		else
+			leave_mm(smp_processor_id());
+	}
+	if (cpu_mask)
+		flush_tlb_others(cpu_mask, mm, 0, PAGE_OFFSET);
+}
+
+void flush_tlb_page(struct vm_area_struct * vma, unsigned long va)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned long cpu_mask = mm->cpu_vm_mask & ~(1 << smp_processor_id());
+
+	if (current->active_mm == mm) {
+		if (current->mm)
+			__flush_tlb_one(va);
+		else
+			leave_mm(smp_processor_id());
+	}
+	if (cpu_mask)
+		flush_tlb_others(cpu_mask, mm, va, va + 1);
+}
+
+void flush_tlb_range(struct mm_struct *mm, unsigned long start, unsigned long end)
+{
+	unsigned long cpu_mask = mm->cpu_vm_mask & ~(1 << smp_processor_id());
+
+	if (current->active_mm == mm) {
+		if (current->mm)
+			__flush_tlb_range(start, end);
+		else
+			leave_mm(smp_processor_id());
+	}
+	if (cpu_mask)
+		flush_tlb_others(cpu_mask, mm, start, end);
+}
+
+static inline void do_flush_tlb_all_local(void)
+{
+	unsigned long cpu = smp_processor_id();
+
+	__flush_tlb_all();
+	if (cpu_tlbstate[cpu].state == TLBSTATE_LAZY)
+		leave_mm(cpu);
+}
+
+static void flush_tlb_all_ipi(void* info)
+{
+	do_flush_tlb_all_local();
+}
+
+void flush_tlb_all(void)
+{
+	smp_call_function (flush_tlb_all_ipi,0,1,1);
+
+	do_flush_tlb_all_local();
+}
+
+/*
+ * this function sends a 'reschedule' IPI to another CPU.
+ * it goes straight through and wastes no time serializing
+ * anything. Worst case is that we lose a reschedule ...
+ */
+
+void smp_send_reschedule(int cpu)
+{
+	send_IPI_mask(1 << cpu, RESCHEDULE_VECTOR);
+}
+
+/*
+ * Structure and data for smp_call_function(). This is designed to minimise
+ * static memory requirements. It also looks cleaner.
+ */
+static spinlock_t call_lock = SPIN_LOCK_UNLOCKED;
+
+struct call_data_struct {
+	void (*func) (void *info);
+	void *info;
+	atomic_t started;
+	atomic_t finished;
+	int wait;
+};
+
+static struct call_data_struct * call_data;
+
+/*
+ * this function sends a 'generic call function' IPI to all other CPUs
+ * in the system.
+ */
+
+int smp_call_function (void (*func) (void *info), void *info, int nonatomic,
+			int wait)
+/*
+ * [SUMMARY] Run a function on all other CPUs.
+ * <func> The function to run. This must be fast and non-blocking.
+ * <info> An arbitrary pointer to pass to the function.
+ * <nonatomic> currently unused.
+ * <wait> If true, wait (atomically) until function has completed on other CPUs.
+ * [RETURNS] 0 on success, else a negative status code. Does not return until
+ * remote CPUs are nearly ready to execute <<func>> or are or have executed.
+ *
+ * You must not call this function with disabled interrupts or from a
+ * hardware interrupt handler, you may call it from a bottom half handler.
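+ *
+ * A typical caller is flush_tlb_all() above (sketch):
+ *
+ *	smp_call_function(flush_tlb_all_ipi, 0, 1, 1);
+ *	do_flush_tlb_all_local();
+ *
+ * i.e. run the flush on every other cpu, wait for them all, then do
+ * the local flush.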
+ */ +{ + struct call_data_struct data; + int cpus = smp_num_cpus-1; + + if (!cpus) + return 0; + + data.func = func; + data.info = info; + atomic_set(&data.started, 0); + data.wait = wait; + if (wait) + atomic_set(&data.finished, 0); + + spin_lock_bh(&call_lock); + call_data = &data; + /* Send a message to all other CPUs and wait for them to respond */ + send_IPI_allbutself(CALL_FUNCTION_VECTOR); + + /* Wait for response */ + while (atomic_read(&data.started) != cpus) + barrier(); + + if (wait) + while (atomic_read(&data.finished) != cpus) + barrier(); + spin_unlock_bh(&call_lock); + + return 0; +} + +static void stop_this_cpu (void * dummy) +{ + /* + * Remove this CPU: + */ + clear_bit(smp_processor_id(), &cpu_online_map); + __cli(); + disable_local_APIC(); + if (cpu_data[smp_processor_id()].hlt_works_ok) + for(;;) __asm__("hlt"); + for (;;); +} + +/* + * this function calls the 'stop' function on all other CPUs in the system. + */ + +void smp_send_stop(void) +{ + smp_call_function(stop_this_cpu, NULL, 1, 0); + smp_num_cpus = 1; + + __cli(); + disable_local_APIC(); + __sti(); +} + +/* + * Reschedule call back. Nothing to do, + * all the work is done automatically when + * we return from the interrupt. + */ +asmlinkage void smp_reschedule_interrupt(void) +{ + ack_APIC_irq(); +} + +asmlinkage void smp_call_function_interrupt(void) +{ + void (*func) (void *info) = call_data->func; + void *info = call_data->info; + int wait = call_data->wait; + + ack_APIC_irq(); + /* + * Notify initiating CPU that I've grabbed the data and am + * about to execute the function + */ + atomic_inc(&call_data->started); + /* + * At this point the info structure may be out of scope unless wait==1 + */ + (*func)(info); + if (wait) + atomic_inc(&call_data->finished); +} + diff -urpN linux-2.4.9-linus/arch/i386/kernel/smpboot.c linux-2.4.9-larpage/arch/i386/kernel/smpboot.c --- linux-2.4.9-linus/arch/i386/kernel/smpboot.c 2001-02-13 14:13:43.000000000 -0800 +++ linux-2.4.9-larpage/arch/i386/kernel/smpboot.c 2002-11-20 02:02:22.000000000 -0800 @@ -124,7 +124,7 @@ static unsigned long __init setup_trampo */ void __init smp_alloc_memory(void) { - trampoline_base = (void *) alloc_bootmem_low_pages(PAGE_SIZE); + trampoline_base = (void *) alloc_bootmem_low_pages(MMUPAGE_SIZE); /* * Has to be in very low memory so we can execute * real-mode AP code. 
@@ -577,7 +577,7 @@ static void __init do_boot_cpu (int apic /* So we see what's up */ printk("Booting processor %d/%d eip %lx\n", cpu, apicid, start_eip); - stack_start.esp = (void *) (1024 + PAGE_SIZE + (char *)idle); + stack_start.esp = (void *) (1024 + MMUPAGE_SIZE + (char *)idle); /* * This grunge runs the startup process for diff -urpN linux-2.4.9-linus/arch/i386/kernel/sys_i386.c linux-2.4.9-larpage/arch/i386/kernel/sys_i386.c --- linux-2.4.9-linus/arch/i386/kernel/sys_i386.c 2001-03-19 12:35:09.000000000 -0800 +++ linux-2.4.9-larpage/arch/i386/kernel/sys_i386.c 2002-11-20 02:02:22.000000000 -0800 @@ -97,10 +97,10 @@ asmlinkage int old_mmap(struct mmap_arg_ goto out; err = -EINVAL; - if (a.offset & ~PAGE_MASK) + if (a.offset & ~MMUPAGE_MASK) goto out; - err = do_mmap2(a.addr, a.len, a.prot, a.flags, a.fd, a.offset >> PAGE_SHIFT); + err = do_mmap2(a.addr, a.len, a.prot, a.flags, a.fd, a.offset >> MMUPAGE_SHIFT); out: return err; } diff -urpN linux-2.4.9-linus/arch/i386/kernel/traps.c linux-2.4.9-larpage/arch/i386/kernel/traps.c --- linux-2.4.9-linus/arch/i386/kernel/traps.c 2001-08-12 11:13:59.000000000 -0700 +++ linux-2.4.9-larpage/arch/i386/kernel/traps.c 2002-11-20 02:02:23.000000000 -0800 @@ -41,8 +41,8 @@ #include #include -#ifdef CONFIG_X86_VISWS_APIC #include +#ifdef CONFIG_X86_VISWS_APIC #include #include #endif @@ -160,7 +160,7 @@ void show_trace_task(struct task_struct unsigned long esp = tsk->thread.esp; /* User space on another CPU? */ - if ((esp ^ (unsigned long)tsk) & (PAGE_MASK<<1)) + if ((esp ^ (unsigned long)tsk) & ~(THREAD_SIZE-1)) return; show_trace((unsigned long *)esp); } @@ -791,24 +791,14 @@ asmlinkage void math_emulate(long arg) #endif /* CONFIG_MATH_EMULATION */ -#ifndef CONFIG_M686 void __init trap_init_f00f_bug(void) { - unsigned long page; - pgd_t * pgd; - pmd_t * pmd; - pte_t * pte; - /* - * Allocate a new page in virtual address space, - * move the IDT into it and write protect this page. - */ - page = (unsigned long) vmalloc(PAGE_SIZE); - pgd = pgd_offset(&init_mm, page); - pmd = pmd_offset(pgd, page); - pte = pte_offset(pmd, page); - __free_page(pte_page(*pte)); - *pte = mk_pte_phys(__pa(&idt_table), PAGE_KERNEL_RO); + * Take a new slot in virtual address space, + * duplicate the IDT in it and write protect this entry. + */ + __set_fixmap(FIX_F00F_IDT, __pa(&idt_table), PAGE_KERNEL_RO); + /* * Not that any PGE-capable kernel should have the f00f bug ... */ @@ -819,10 +809,9 @@ void __init trap_init_f00f_bug(void) * variable so that updating idt will automatically * update the idt descriptor.. */ - idt = (struct desc_struct *)page; + idt = (struct desc_struct *)fix_to_virt(FIX_F00F_IDT); __asm__ __volatile__("lidt %0": "=m" (idt_descr)); } -#endif #define _set_gate(gate_addr,type,dpl,addr) \ do { \ diff -urpN linux-2.4.9-linus/arch/i386/kernel/traps.c.orig linux-2.4.9-larpage/arch/i386/kernel/traps.c.orig --- linux-2.4.9-linus/arch/i386/kernel/traps.c.orig 1969-12-31 16:00:00.000000000 -0800 +++ linux-2.4.9-larpage/arch/i386/kernel/traps.c.orig 2002-11-20 02:02:23.000000000 -0800 @@ -0,0 +1,1025 @@ +/* + * linux/arch/i386/traps.c + * + * Copyright (C) 1991, 1992 Linus Torvalds + * + * Pentium III FXSR, SSE support + * Gareth Hughes , May 2000 + */ + +/* + * 'Traps.c' handles hardware traps and faults after we have saved some + * state in 'asm.s'. 
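+ *
+ * (On the show_trace_task() change in the hunk above: the task struct
+ * and its kernel stack share one THREAD_SIZE-aligned block, so
+ *
+ *	(esp ^ (unsigned long)tsk) & ~(THREAD_SIZE-1)
+ *
+ * is nonzero exactly when esp lies outside tsk's own stack.  The old
+ * "PAGE_MASK<<1" spelling encoded THREAD_SIZE == 2*PAGE_SIZE, which no
+ * longer holds once PAGE_SIZE is decoupled from the 4K mmupage.)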
+ */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#ifdef CONFIG_MCA +#include +#include +#endif + +#include +#include +#include +#include +#include +#include +#include + +#include +#include + +#include +#ifdef CONFIG_X86_VISWS_APIC +#include +#include +#endif + +#include +#include + +asmlinkage int system_call(void); +asmlinkage void lcall7(void); +asmlinkage void lcall27(void); + +struct desc_struct default_ldt[] = { { 0, 0 }, { 0, 0 }, { 0, 0 }, + { 0, 0 }, { 0, 0 } }; + +/* + * The IDT has to be page-aligned to simplify the Pentium + * F0 0F bug workaround.. We have a special link segment + * for this. + */ +struct desc_struct idt_table[256] __attribute__((__section__(".data.idt"))) = { {0, 0}, }; + +extern void bust_spinlocks(void); + +asmlinkage void divide_error(void); +asmlinkage void debug(void); +asmlinkage void nmi(void); +asmlinkage void int3(void); +asmlinkage void overflow(void); +asmlinkage void bounds(void); +asmlinkage void invalid_op(void); +asmlinkage void device_not_available(void); +asmlinkage void double_fault(void); +asmlinkage void coprocessor_segment_overrun(void); +asmlinkage void invalid_TSS(void); +asmlinkage void segment_not_present(void); +asmlinkage void stack_segment(void); +asmlinkage void general_protection(void); +asmlinkage void page_fault(void); +asmlinkage void coprocessor_error(void); +asmlinkage void simd_coprocessor_error(void); +asmlinkage void alignment_check(void); +asmlinkage void spurious_interrupt_bug(void); +asmlinkage void machine_check(void); + +int kstack_depth_to_print = 24; + + +/* + * If the address is either in the .text section of the + * kernel, or in the vmalloc'ed module regions, it *may* + * be the address of a calling routine + */ + +#ifdef CONFIG_MODULES + +extern struct module *module_list; +extern struct module kernel_module; + +static inline int kernel_text_address(unsigned long addr) +{ + int retval = 0; + struct module *mod; + + if (addr >= (unsigned long) &_stext && + addr <= (unsigned long) &_etext) + return 1; + + for (mod = module_list; mod != &kernel_module; mod = mod->next) { + /* mod_bound tests for addr being inside the vmalloc'ed + * module area. Of course it'd be better to test only + * for the .text subset... */ + if (mod_bound(addr, 0, mod)) { + retval = 1; + break; + } + } + + return retval; +} + +#else + +static inline int kernel_text_address(unsigned long addr) +{ + return (addr >= (unsigned long) &_stext && + addr <= (unsigned long) &_etext); +} + +#endif + +void show_trace(unsigned long * stack) +{ + int i; + unsigned long addr; + + if (!stack) + stack = (unsigned long*)&stack; + + printk("Call Trace: "); + i = 1; + while (((long) stack & (THREAD_SIZE-1)) != 0) { + addr = *stack++; + if (kernel_text_address(addr)) { + if (i && ((i % 6) == 0)) + printk("\n "); + printk("[<%08lx>] ", addr); + i++; + } + } + printk("\n"); +} + +void show_trace_task(struct task_struct *tsk) +{ + unsigned long esp = tsk->thread.esp; + + /* User space on another CPU? */ + if ((esp ^ (unsigned long)tsk) & ~(THREAD_SIZE-1)) + return; + show_trace((unsigned long *)esp); +} + +void show_stack(unsigned long * esp) +{ + unsigned long *stack; + int i; + + // debugging aid: "show_stack(NULL);" prints the + // back trace for this cpu. 
+
+	if(esp==NULL)
+		esp=(unsigned long*)&esp;
+
+	stack = esp;
+	for(i=0; i < kstack_depth_to_print; i++) {
+		if (((long) stack & (THREAD_SIZE-1)) == 0)
+			break;
+		if (i && ((i % 8) == 0))
+			printk("\n       ");
+		printk("%08lx ", *stack++);
+	}
+	printk("\n");
+	show_trace(esp);
+}
+
+static void show_registers(struct pt_regs *regs)
+{
+	int i;
+	int in_kernel = 1;
+	unsigned long esp;
+	unsigned short ss;
+
+	esp = (unsigned long) (&regs->esp);
+	ss = __KERNEL_DS;
+	if (regs->xcs & 3) {
+		in_kernel = 0;
+		esp = regs->esp;
+		ss = regs->xss & 0xffff;
+	}
+	printk("CPU:    %d\nEIP:    %04x:[<%08lx>]\nEFLAGS: %08lx\n",
+		smp_processor_id(), 0xffff & regs->xcs, regs->eip, regs->eflags);
+	printk("eax: %08lx   ebx: %08lx   ecx: %08lx   edx: %08lx\n",
+		regs->eax, regs->ebx, regs->ecx, regs->edx);
+	printk("esi: %08lx   edi: %08lx   ebp: %08lx   esp: %08lx\n",
+		regs->esi, regs->edi, regs->ebp, esp);
+	printk("ds: %04x   es: %04x   ss: %04x\n",
+		regs->xds & 0xffff, regs->xes & 0xffff, ss);
+	printk("Process %s (pid: %d, stackpage=%08lx)",
+		current->comm, current->pid, 4096+(unsigned long)current);
+	/*
+	 * When in-kernel, we also print out the stack and code at the
+	 * time of the fault..
+	 */
+	if (in_kernel) {
+
+		printk("\nStack: ");
+		show_stack((unsigned long*)esp);
+
+		printk("\nCode: ");
+		if(regs->eip < PAGE_OFFSET)
+			goto bad;
+
+		for(i=0;i<20;i++)
+		{
+			unsigned char c;
+			if(__get_user(c, &((unsigned char*)regs->eip)[i])) {
+bad:
+				printk(" Bad EIP value.");
+				break;
+			}
+			printk("%02x ", c);
+		}
+	}
+	printk("\n");
+}
+
+spinlock_t die_lock = SPIN_LOCK_UNLOCKED;
+
+void die(const char * str, struct pt_regs * regs, long err)
+{
+	console_verbose();
+	spin_lock_irq(&die_lock);
+	printk("%s: %04lx\n", str, err & 0xffff);
+	show_registers(regs);
+
+	spin_unlock_irq(&die_lock);
+	do_exit(SIGSEGV);
+}
+
+static inline void die_if_kernel(const char * str, struct pt_regs * regs, long err)
+{
+	if (!(regs->eflags & VM_MASK) && !(3 & regs->xcs))
+		die(str, regs, err);
+}
+
+static inline unsigned long get_cr2(void)
+{
+	unsigned long address;
+
+	/* get the address */
+	__asm__("movl %%cr2,%0":"=r" (address));
+	return address;
+}
+
+static void inline do_trap(int trapnr, int signr, char *str, int vm86,
+			   struct pt_regs * regs, long error_code, siginfo_t *info)
+{
+	if (vm86 && regs->eflags & VM_MASK)
+		goto vm86_trap;
+	if (!(regs->xcs & 3))
+		goto kernel_trap;
+
+	trap_signal: {
+		struct task_struct *tsk = current;
+		tsk->thread.error_code = error_code;
+		tsk->thread.trap_no = trapnr;
+		if (info)
+			force_sig_info(signr, info, tsk);
+		else
+			force_sig(signr, tsk);
+		return;
+	}
+
+	kernel_trap: {
+		unsigned long fixup = search_exception_table(regs->eip);
+		if (fixup)
+			regs->eip = fixup;
+		else
+			die(str, regs, error_code);
+		return;
+	}
+
+	vm86_trap: {
+		int ret = handle_vm86_trap((struct kernel_vm86_regs *) regs, error_code, trapnr);
+		if (ret) goto trap_signal;
+		return;
+	}
+}
+
+#define DO_ERROR(trapnr, signr, str, name) \
+asmlinkage void do_##name(struct pt_regs * regs, long error_code) \
+{ \
+	do_trap(trapnr, signr, str, 0, regs, error_code, NULL); \
+}
+
+#define DO_ERROR_INFO(trapnr, signr, str, name, sicode, siaddr) \
+asmlinkage void do_##name(struct pt_regs * regs, long error_code) \
+{ \
+	siginfo_t info; \
+	info.si_signo = signr; \
+	info.si_errno = 0; \
+	info.si_code = sicode; \
+	info.si_addr = (void *)siaddr; \
+	do_trap(trapnr, signr, str, 0, regs, error_code, &info); \
+}
+
+#define DO_VM86_ERROR(trapnr, signr, str, name) \
+asmlinkage void do_##name(struct pt_regs * regs, long error_code) \
+{ \
+ do_trap(trapnr, signr, str, 1, regs, error_code, NULL); \ +} + +#define DO_VM86_ERROR_INFO(trapnr, signr, str, name, sicode, siaddr) \ +asmlinkage void do_##name(struct pt_regs * regs, long error_code) \ +{ \ + siginfo_t info; \ + info.si_signo = signr; \ + info.si_errno = 0; \ + info.si_code = sicode; \ + info.si_addr = (void *)siaddr; \ + do_trap(trapnr, signr, str, 1, regs, error_code, &info); \ +} + +DO_VM86_ERROR_INFO( 0, SIGFPE, "divide error", divide_error, FPE_INTDIV, regs->eip) +DO_VM86_ERROR( 3, SIGTRAP, "int3", int3) +DO_VM86_ERROR( 4, SIGSEGV, "overflow", overflow) +DO_VM86_ERROR( 5, SIGSEGV, "bounds", bounds) +DO_ERROR_INFO( 6, SIGILL, "invalid operand", invalid_op, ILL_ILLOPN, regs->eip) +DO_VM86_ERROR( 7, SIGSEGV, "device not available", device_not_available) +DO_ERROR( 8, SIGSEGV, "double fault", double_fault) +DO_ERROR( 9, SIGFPE, "coprocessor segment overrun", coprocessor_segment_overrun) +DO_ERROR(10, SIGSEGV, "invalid TSS", invalid_TSS) +DO_ERROR(11, SIGBUS, "segment not present", segment_not_present) +DO_ERROR(12, SIGBUS, "stack segment", stack_segment) +DO_ERROR_INFO(17, SIGBUS, "alignment check", alignment_check, BUS_ADRALN, get_cr2()) + +asmlinkage void do_general_protection(struct pt_regs * regs, long error_code) +{ + if (regs->eflags & VM_MASK) + goto gp_in_vm86; + + if (!(regs->xcs & 3)) + goto gp_in_kernel; + + current->thread.error_code = error_code; + current->thread.trap_no = 13; + force_sig(SIGSEGV, current); + return; + +gp_in_vm86: + handle_vm86_fault((struct kernel_vm86_regs *) regs, error_code); + return; + +gp_in_kernel: + { + unsigned long fixup; + fixup = search_exception_table(regs->eip); + if (fixup) { + regs->eip = fixup; + return; + } + die("general protection fault", regs, error_code); + } +} + +static void mem_parity_error(unsigned char reason, struct pt_regs * regs) +{ + printk("Uhhuh. NMI received. Dazed and confused, but trying to continue\n"); + printk("You probably have a hardware problem with your RAM chips\n"); + + /* Clear and disable the memory parity error line. */ + reason = (reason & 0xf) | 4; + outb(reason, 0x61); +} + +static void io_check_error(unsigned char reason, struct pt_regs * regs) +{ + unsigned long i; + + printk("NMI: IOCK error (debug interrupt?)\n"); + show_registers(regs); + + /* Re-enable the IOCK line, wait for a few seconds */ + reason = (reason & 0xf) | 8; + outb(reason, 0x61); + i = 2000; + while (--i) udelay(1000); + reason &= ~8; + outb(reason, 0x61); +} + +static void unknown_nmi_error(unsigned char reason, struct pt_regs * regs) +{ +#ifdef CONFIG_MCA + /* Might actually be able to figure out what the guilty party + * is. */ + if( MCA_bus ) { + mca_handle_nmi(); + return; + } +#endif + printk("Uhhuh. NMI received for unknown reason %02x.\n", reason); + printk("Dazed and confused, but trying to continue\n"); + printk("Do you have a strange power saving mode enabled?\n"); +} + +#if CONFIG_X86_IO_APIC + +int nmi_watchdog = 0; + +static int __init setup_nmi_watchdog(char *str) +{ + get_option(&str, &nmi_watchdog); + return 1; +} + +__setup("nmi_watchdog=", setup_nmi_watchdog); + +static spinlock_t nmi_print_lock = SPIN_LOCK_UNLOCKED; + +inline void nmi_watchdog_tick(struct pt_regs * regs) +{ + /* + * the best way to detect wether a CPU has a 'hard lockup' problem + * is to check it's local APIC timer IRQ counts. If they are not + * changing then that CPU has some problem. + * + * as these watchdog NMI IRQs are broadcasted to every CPU, here + * we only have to check the current processor. 
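+	 * (the watchdog NMI fires at roughly HZ per second, so the
+	 * alert_counter threshold of 5*HZ below amounts to five seconds
+	 * of a stuck apic_timer_irqs count before we declare a lockup)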
+	 *
+	 * since NMIs don't listen to _any_ locks, we have to be extremely
+	 * careful not to rely on unsafe variables. The printk might lock
+	 * up though, so we have to break up console_lock first ...
+	 * [when there will be more tty-related locks, break them up
+	 *  here too!]
+	 */
+
+	static unsigned int last_irq_sums [NR_CPUS],
+				alert_counter [NR_CPUS];
+
+	/*
+	 * Since current-> is always on the stack, and we always switch
+	 * the stack NMI-atomically, it's safe to use smp_processor_id().
+	 */
+	int sum, cpu = smp_processor_id();
+
+	sum = apic_timer_irqs[cpu];
+
+	if (last_irq_sums[cpu] == sum) {
+		/*
+		 * Ayiee, looks like this CPU is stuck ...
+		 * wait a few IRQs (5 seconds) before doing the oops ...
+		 */
+		alert_counter[cpu]++;
+		if (alert_counter[cpu] == 5*HZ) {
+			spin_lock(&nmi_print_lock);
+			/*
+			 * We are in trouble anyway, let's at least try
+			 * to get a message out.
+			 */
+			bust_spinlocks();
+			printk("NMI Watchdog detected LOCKUP on CPU%d, registers:\n", cpu);
+			show_registers(regs);
+			printk("console shuts up ...\n");
+			console_silent();
+			spin_unlock(&nmi_print_lock);
+			do_exit(SIGSEGV);
+		}
+	} else {
+		last_irq_sums[cpu] = sum;
+		alert_counter[cpu] = 0;
+	}
+}
+#endif
+
+asmlinkage void do_nmi(struct pt_regs * regs, long error_code)
+{
+	unsigned char reason = inb(0x61);
+
+	++nmi_count(smp_processor_id());
+	if (!(reason & 0xc0)) {
+#if CONFIG_X86_IO_APIC
+		/*
+		 * Ok, so this is none of the documented NMI sources,
+		 * so it must be the NMI watchdog.
+		 */
+		if (nmi_watchdog) {
+			nmi_watchdog_tick(regs);
+			return;
+		} else
+			unknown_nmi_error(reason, regs);
+#else
+		unknown_nmi_error(reason, regs);
+#endif
+		return;
+	}
+	if (reason & 0x80)
+		mem_parity_error(reason, regs);
+	if (reason & 0x40)
+		io_check_error(reason, regs);
+	/*
+	 * Reassert NMI in case it became active meanwhile
+	 * as it's edge-triggered.
+	 */
+	outb(0x8f, 0x70);
+	inb(0x71);		/* dummy */
+	outb(0x0f, 0x70);
+	inb(0x71);		/* dummy */
+}
+
+/*
+ * Our handling of the processor debug registers is non-trivial.
+ * We do not clear them on entry and exit from the kernel. Therefore
+ * it is possible to get a watchpoint trap here from inside the kernel.
+ * However, the code in ./ptrace.c has ensured that the user can
+ * only set watchpoints on userspace addresses. Therefore the in-kernel
+ * watchpoint trap can only occur in code which is reading/writing
+ * from user space. Such code must not hold kernel locks (since it
+ * can equally take a page fault), therefore it is safe to call
+ * force_sig_info even though that claims and releases locks.
+ *
+ * Code in ./signal.c ensures that the debug control register
+ * is restored before we deliver any signal, and therefore that
+ * user code runs with the correct debug control register even though
+ * we clear it here.
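+ *
+ * For reference below: DR6 reports which breakpoint fired
+ * (DR_TRAP0..DR_TRAP3 in bits 0-3) and single-step traps (DR_STEP,
+ * bit 14), while DR7's low byte holds the per-breakpoint enable
+ * bits; so thread.debugreg[7] == 0 in do_debug() means the task
+ * owns no watchpoints and any DR_TRAPx indication must be spurious.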
+ * + * Being careful here means that we don't have to be as careful in a + * lot of more complicated places (task switching can be a bit lazy + * about restoring all the debug state, and ptrace doesn't have to + * find every occurrence of the TF bit that could be saved away even + * by user code) + */ +asmlinkage void do_debug(struct pt_regs * regs, long error_code) +{ + unsigned int condition; + struct task_struct *tsk = current; + siginfo_t info; + + __asm__ __volatile__("movl %%db6,%0" : "=r" (condition)); + + /* Mask out spurious debug traps due to lazy DR7 setting */ + if (condition & (DR_TRAP0|DR_TRAP1|DR_TRAP2|DR_TRAP3)) { + if (!tsk->thread.debugreg[7]) + goto clear_dr7; + } + + if (regs->eflags & VM_MASK) + goto debug_vm86; + + /* Save debug status register where ptrace can see it */ + tsk->thread.debugreg[6] = condition; + + /* Mask out spurious TF errors due to lazy TF clearing */ + if (condition & DR_STEP) { + /* + * The TF error should be masked out only if the current + * process is not traced and if the TRAP flag has been set + * previously by a tracing process (condition detected by + * the PT_DTRACE flag); remember that the i386 TRAP flag + * can be modified by the process itself in user mode, + * allowing programs to debug themselves without the ptrace() + * interface. + */ + if ((tsk->ptrace & (PT_DTRACE|PT_PTRACED)) == PT_DTRACE) + goto clear_TF; + } + + /* Ok, finally something we can handle */ + tsk->thread.trap_no = 1; + tsk->thread.error_code = error_code; + info.si_signo = SIGTRAP; + info.si_errno = 0; + info.si_code = TRAP_BRKPT; + + /* If this is a kernel mode trap, save the user PC on entry to + * the kernel, that's what the debugger can make sense of. + */ + info.si_addr = ((regs->xcs & 3) == 0) ? (void *)tsk->thread.eip : + (void *)regs->eip; + force_sig_info(SIGTRAP, &info, tsk); + + /* Disable additional traps. They'll be re-enabled when + * the signal is delivered. + */ +clear_dr7: + __asm__("movl %0,%%db7" + : /* no output */ + : "r" (0)); + return; + +debug_vm86: + handle_vm86_trap((struct kernel_vm86_regs *) regs, error_code, 1); + return; + +clear_TF: + regs->eflags &= ~TF_MASK; + return; +} + +/* + * Note that we play around with the 'TS' bit in an attempt to get + * the correct behaviour even in the presence of the asynchronous + * IRQ13 behaviour + */ +void math_error(void *eip) +{ + struct task_struct * task; + siginfo_t info; + unsigned short cwd, swd; + + /* + * Save the info for the exception handler and clear the error. + */ + task = current; + save_init_fpu(task); + task->thread.trap_no = 16; + task->thread.error_code = 0; + info.si_signo = SIGFPE; + info.si_errno = 0; + info.si_code = __SI_FAULT; + info.si_addr = eip; + /* + * (~cwd & swd) will mask out exceptions that are not set to unmasked + * status. 0x3f is the exception bits in these regs, 0x200 is the + * C1 reg you need in case of a stack fault, 0x040 is the stack + * fault bit. 
We should only be taking one exception at a time,
+	 * so if this combination doesn't produce any single exception,
+	 * then we have a bad program that isn't synchronizing its FPU usage
+	 * and it will suffer the consequences since we won't be able to
+	 * fully reproduce the context of the exception.
+	 */
+	cwd = get_fpu_cwd(task);
+	swd = get_fpu_swd(task);
+	switch (((~cwd) & swd & 0x3f) | (swd & 0x240)) {
+		case 0x000:
+		default:
+			break;
+		case 0x001: /* Invalid Op */
+		case 0x040: /* Stack Fault */
+		case 0x240: /* Stack Fault | Direction */
+			info.si_code = FPE_FLTINV;
+			break;
+		case 0x002: /* Denormalize */
+		case 0x010: /* Underflow */
+			info.si_code = FPE_FLTUND;
+			break;
+		case 0x004: /* Zero Divide */
+			info.si_code = FPE_FLTDIV;
+			break;
+		case 0x008: /* Overflow */
+			info.si_code = FPE_FLTOVF;
+			break;
+		case 0x020: /* Precision */
+			info.si_code = FPE_FLTRES;
+			break;
+	}
+	force_sig_info(SIGFPE, &info, task);
+}
+
+asmlinkage void do_coprocessor_error(struct pt_regs * regs, long error_code)
+{
+	ignore_irq13 = 1;
+	math_error((void *)regs->eip);
+}
+
+void simd_math_error(void *eip)
+{
+	struct task_struct * task;
+	siginfo_t info;
+	unsigned short mxcsr;
+
+	/*
+	 * Save the info for the exception handler and clear the error.
+	 */
+	task = current;
+	save_init_fpu(task);
+	task->thread.trap_no = 19;
+	task->thread.error_code = 0;
+	info.si_signo = SIGFPE;
+	info.si_errno = 0;
+	info.si_code = __SI_FAULT;
+	info.si_addr = eip;
+	/*
+	 * The SIMD FPU exceptions are handled a little differently, as there
+	 * is only a single status/control register. Thus, to determine which
+	 * unmasked exception was caught we must mask the exception mask bits
+	 * at 0x1f80, and then use these to mask the exception bits at 0x3f.
+	 */
+	mxcsr = get_fpu_mxcsr(task);
+	switch (~((mxcsr & 0x1f80) >> 7) & (mxcsr & 0x3f)) {
+		case 0x000:
+		default:
+			break;
+		case 0x001: /* Invalid Op */
+			info.si_code = FPE_FLTINV;
+			break;
+		case 0x002: /* Denormalize */
+		case 0x010: /* Underflow */
+			info.si_code = FPE_FLTUND;
+			break;
+		case 0x004: /* Zero Divide */
+			info.si_code = FPE_FLTDIV;
+			break;
+		case 0x008: /* Overflow */
+			info.si_code = FPE_FLTOVF;
+			break;
+		case 0x020: /* Precision */
+			info.si_code = FPE_FLTRES;
+			break;
+	}
+	force_sig_info(SIGFPE, &info, task);
+}
+
+asmlinkage void do_simd_coprocessor_error(struct pt_regs * regs,
+					  long error_code)
+{
+	if (cpu_has_xmm) {
+		/* Handle SIMD FPU exceptions on PIII+ processors. */
+		ignore_irq13 = 1;
+		simd_math_error((void *)regs->eip);
+	} else {
+		/*
+		 * Handle strange cache flush from user space exception
+		 * in all other cases.  This is undocumented behaviour.
+		 */
+		if (regs->eflags & VM_MASK) {
+			handle_vm86_fault((struct kernel_vm86_regs *)regs,
+					  error_code);
+			return;
+		}
+		die_if_kernel("cache flush denied", regs, error_code);
+		current->thread.trap_no = 19;
+		current->thread.error_code = error_code;
+		force_sig(SIGSEGV, current);
+	}
+}
+
+asmlinkage void do_spurious_interrupt_bug(struct pt_regs * regs,
+					  long error_code)
+{
+#if 0
+	/* No need to warn about this any longer. */
+	printk("Ignoring P6 Local APIC Spurious Interrupt Bug...\n");
+#endif
+}
+
+/*
+ * 'math_state_restore()' saves the current math information in the
+ * old math state array, and gets the new ones from the current task
+ *
+ * Careful.. There are problems with IBM-designed IRQ13 behaviour.
+ * Don't touch unless you *really* know how it works.
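+ *
+ * A worked example of the decoding in math_error() above: an
+ * unmasked zero-divide leaves swd bit 0x004 set while cwd bit 0x004
+ * (its mask) is clear, so ((~cwd) & swd & 0x3f) | (swd & 0x240)
+ * yields 0x004 and si_code becomes FPE_FLTDIV; a masked exception
+ * never traps at all.  simd_math_error() makes the same decision
+ * from the one MXCSR register, whose mask bits sit at 0x1f80,
+ * exactly 7 bits above the matching flag bits at 0x3f.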
+ */ +asmlinkage void math_state_restore(struct pt_regs regs) +{ + __asm__ __volatile__("clts"); /* Allow maths ops (or we recurse) */ + + if (current->used_math) { + restore_fpu(current); + } else { + init_fpu(); + } + current->flags |= PF_USEDFPU; /* So we fnsave on switch_to() */ +} + +#ifndef CONFIG_MATH_EMULATION + +asmlinkage void math_emulate(long arg) +{ + printk("math-emulation not enabled and no coprocessor found.\n"); + printk("killing %s.\n",current->comm); + force_sig(SIGFPE,current); + schedule(); +} + +#endif /* CONFIG_MATH_EMULATION */ + +void __init trap_init_f00f_bug(void) +{ + /* + * Take a new slot in virtual address space, + * duplicate the IDT in it and write protect this entry. + */ + __set_fixmap(FIX_F00F_IDT, __pa(&idt_table), PAGE_KERNEL_RO); + + /* + * Not that any PGE-capable kernel should have the f00f bug ... + */ + __flush_tlb_all(); + + /* + * "idt" is magic - it overlaps the idt_descr + * variable so that updating idt will automatically + * update the idt descriptor.. + */ + idt = (struct desc_struct *)page; + __asm__ __volatile__("lidt %0": "=m" (idt_descr)); +} +#endif + +#define _set_gate(gate_addr,type,dpl,addr) \ +do { \ + int __d0, __d1; \ + __asm__ __volatile__ ("movw %%dx,%%ax\n\t" \ + "movw %4,%%dx\n\t" \ + "movl %%eax,%0\n\t" \ + "movl %%edx,%1" \ + :"=m" (*((long *) (gate_addr))), \ + "=m" (*(1+(long *) (gate_addr))), "=&a" (__d0), "=&d" (__d1) \ + :"i" ((short) (0x8000+(dpl<<13)+(type<<8))), \ + "3" ((char *) (addr)),"2" (__KERNEL_CS << 16)); \ +} while (0) + + +/* + * This needs to use 'idt_table' rather than 'idt', and + * thus use the _nonmapped_ version of the IDT, as the + * Pentium F0 0F bugfix can have resulted in the mapped + * IDT being write-protected. + */ +void set_intr_gate(unsigned int n, void *addr) +{ + _set_gate(idt_table+n,14,0,addr); +} + +static void __init set_trap_gate(unsigned int n, void *addr) +{ + _set_gate(idt_table+n,15,0,addr); +} + +static void __init set_system_gate(unsigned int n, void *addr) +{ + _set_gate(idt_table+n,15,3,addr); +} + +static void __init set_call_gate(void *a, void *addr) +{ + _set_gate(a,12,3,addr); +} + +#define _set_seg_desc(gate_addr,type,dpl,base,limit) {\ + *((gate_addr)+1) = ((base) & 0xff000000) | \ + (((base) & 0x00ff0000)>>16) | \ + ((limit) & 0xf0000) | \ + ((dpl)<<13) | \ + (0x00408000) | \ + ((type)<<8); \ + *(gate_addr) = (((base) & 0x0000ffff)<<16) | \ + ((limit) & 0x0ffff); } + +#define _set_tssldt_desc(n,addr,limit,type) \ +__asm__ __volatile__ ("movw %w3,0(%2)\n\t" \ + "movw %%ax,2(%2)\n\t" \ + "rorl $16,%%eax\n\t" \ + "movb %%al,4(%2)\n\t" \ + "movb %4,5(%2)\n\t" \ + "movb $0,6(%2)\n\t" \ + "movb %%ah,7(%2)\n\t" \ + "rorl $16,%%eax" \ + : "=m"(*(n)) : "a" (addr), "r"(n), "ir"(limit), "i"(type)) + +void set_tss_desc(unsigned int n, void *addr) +{ + _set_tssldt_desc(gdt_table+__TSS(n), (int)addr, 235, 0x89); +} + +void set_ldt_desc(unsigned int n, void *addr, unsigned int size) +{ + _set_tssldt_desc(gdt_table+__LDT(n), (int)addr, ((size << 3)-1), 0x82); +} + +#ifdef CONFIG_X86_VISWS_APIC + +/* + * On Rev 005 motherboards legacy device interrupt lines are wired directly + * to Lithium from the 307. But the PROM leaves the interrupt type of each + * 307 logical device set appropriate for the 8259. Later we'll actually use + * the 8259, but for now we have to flip the interrupt types to + * level triggered, active lo as required by Lithium. 
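+ *
+ * The 307 SuperI/O is programmed through an index/data pair, as
+ * superio_outb() below shows: first select the logical device by
+ * writing register DEV through the index port REG (0x2e) and data
+ * port VAL (0x2f), then write the target register the same way.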
+ */ + +#define REG 0x2e /* The register to read/write */ +#define DEV 0x07 /* Register: Logical device select */ +#define VAL 0x2f /* The value to read/write */ + +static void +superio_outb(int dev, int reg, int val) +{ + outb(DEV, REG); + outb(dev, VAL); + outb(reg, REG); + outb(val, VAL); +} + +static int __attribute__ ((unused)) +superio_inb(int dev, int reg) +{ + outb(DEV, REG); + outb(dev, VAL); + outb(reg, REG); + return inb(VAL); +} + +#define FLOP 3 /* floppy logical device */ +#define PPORT 4 /* parallel logical device */ +#define UART5 5 /* uart2 logical device (not wired up) */ +#define UART6 6 /* uart1 logical device (THIS is the serial port!) */ +#define IDEST 0x70 /* int. destination (which 307 IRQ line) reg. */ +#define ITYPE 0x71 /* interrupt type register */ + +/* interrupt type bits */ +#define LEVEL 0x01 /* bit 0, 0 == edge triggered */ +#define ACTHI 0x02 /* bit 1, 0 == active lo */ + +static void +superio_init(void) +{ + if (visws_board_type == VISWS_320 && visws_board_rev == 5) { + superio_outb(UART6, IDEST, 0); /* 0 means no intr propagated */ + printk("SGI 320 rev 5: disabling 307 uart1 interrupt\n"); + } +} + +static void +lithium_init(void) +{ + set_fixmap(FIX_LI_PCIA, LI_PCI_A_PHYS); + printk("Lithium PCI Bridge A, Bus Number: %d\n", + li_pcia_read16(LI_PCI_BUSNUM) & 0xff); + set_fixmap(FIX_LI_PCIB, LI_PCI_B_PHYS); + printk("Lithium PCI Bridge B (PIIX4), Bus Number: %d\n", + li_pcib_read16(LI_PCI_BUSNUM) & 0xff); + + /* XXX blindly enables all interrupts */ + li_pcia_write16(LI_PCI_INTEN, 0xffff); + li_pcib_write16(LI_PCI_INTEN, 0xffff); +} + +static void +cobalt_init(void) +{ + /* + * On normal SMP PC this is used only with SMP, but we have to + * use it and set it up here to start the Cobalt clock + */ + set_fixmap(FIX_APIC_BASE, APIC_DEFAULT_PHYS_BASE); + printk("Local APIC ID %lx\n", apic_read(APIC_ID)); + printk("Local APIC Version %lx\n", apic_read(APIC_LVR)); + + set_fixmap(FIX_CO_CPU, CO_CPU_PHYS); + printk("Cobalt Revision %lx\n", co_cpu_read(CO_CPU_REV)); + + set_fixmap(FIX_CO_APIC, CO_APIC_PHYS); + printk("Cobalt APIC ID %lx\n", co_apic_read(CO_APIC_ID)); + + /* Enable Cobalt APIC being careful to NOT change the ID! 
*/ + co_apic_write(CO_APIC_ID, co_apic_read(CO_APIC_ID)|CO_APIC_ENABLE); + + printk("Cobalt APIC enabled: ID reg %lx\n", co_apic_read(CO_APIC_ID)); +} +#endif +void __init trap_init(void) +{ +#ifdef CONFIG_EISA + if (isa_readl(0x0FFFD9) == 'E'+('I'<<8)+('S'<<16)+('A'<<24)) + EISA_bus = 1; +#endif + + set_trap_gate(0,÷_error); + set_trap_gate(1,&debug); + set_intr_gate(2,&nmi); + set_system_gate(3,&int3); /* int3-5 can be called from all */ + set_system_gate(4,&overflow); + set_system_gate(5,&bounds); + set_trap_gate(6,&invalid_op); + set_trap_gate(7,&device_not_available); + set_trap_gate(8,&double_fault); + set_trap_gate(9,&coprocessor_segment_overrun); + set_trap_gate(10,&invalid_TSS); + set_trap_gate(11,&segment_not_present); + set_trap_gate(12,&stack_segment); + set_trap_gate(13,&general_protection); + set_intr_gate(14,&page_fault); + set_trap_gate(15,&spurious_interrupt_bug); + set_trap_gate(16,&coprocessor_error); + set_trap_gate(17,&alignment_check); + set_trap_gate(18,&machine_check); + set_trap_gate(19,&simd_coprocessor_error); + + set_system_gate(SYSCALL_VECTOR,&system_call); + + /* + * default LDT is a single-entry callgate to lcall7 for iBCS + * and a callgate to lcall27 for Solaris/x86 binaries + */ + set_call_gate(&default_ldt[0],lcall7); + set_call_gate(&default_ldt[4],lcall27); + + /* + * Should be a barrier for any external CPU state. + */ + cpu_init(); + +#ifdef CONFIG_X86_VISWS_APIC + superio_init(); + lithium_init(); + cobalt_init(); +#endif +} diff -urpN linux-2.4.9-linus/arch/i386/mm/fault.c linux-2.4.9-larpage/arch/i386/mm/fault.c --- linux-2.4.9-linus/arch/i386/mm/fault.c 2001-05-15 00:16:51.000000000 -0700 +++ linux-2.4.9-larpage/arch/i386/mm/fault.c 2002-11-20 02:02:24.000000000 -0800 @@ -46,9 +46,9 @@ good_area: if (!(vma->vm_flags & VM_WRITE)) goto bad_area; size--; - size += start & ~PAGE_MASK; - size >>= PAGE_SHIFT; - start &= PAGE_MASK; + size += start & ~MMUPAGE_MASK; + size >>= MMUPAGE_SHIFT; + start &= MMUPAGE_MASK; for (;;) { if (handle_mm_fault(current->mm, vma, start, 1) <= 0) @@ -56,7 +56,7 @@ good_area: if (!size) break; size--; - start += PAGE_SIZE; + start += MMUPAGE_SIZE; if (start < vma->vm_end) continue; vma = vma->vm_next; @@ -109,7 +109,6 @@ asmlinkage void do_page_fault(struct pt_ struct mm_struct *mm; struct vm_area_struct * vma; unsigned long address; - unsigned long page; unsigned long fixup; int write; siginfo_t info; @@ -218,7 +217,7 @@ good_area: * Did it hit the DOS screen memory VA from vm86 mode? 
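 	 *
 	 * (The bitmap covers 0xA0000-0xBFFFF as 32 4k mmu pages: a
 	 * fault at the text-mode buffer 0xB8000 sets bit
 	 * (0xB8000 - 0xA0000) >> MMUPAGE_SHIFT == 24.  The shift stays
 	 * MMUPAGE_SHIFT, the hardware granularity, now that PAGE_SHIFT
 	 * may be larger.)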
 */
 	if (regs->eflags & VM_MASK) {
-		unsigned long bit = (address - 0xA0000) >> PAGE_SHIFT;
+		unsigned long bit = (address - 0xA0000) >> MMUPAGE_SHIFT;
 		if (bit < 32)
 			tsk->thread.screen_bitmap |= 1 << bit;
 	}
@@ -273,22 +272,29 @@ no_context:
 
 	bust_spinlocks();
 
-	if (address < PAGE_SIZE)
+	if (address < MMUPAGE_SIZE)
 		printk(KERN_ALERT "Unable to handle kernel NULL pointer dereference");
 	else
 		printk(KERN_ALERT "Unable to handle kernel paging request");
 	printk(" at virtual address %08lx\n",address);
 	printk(" printing eip:\n");
 	printk("%08lx\n", regs->eip);
+
+#ifndef CONFIG_X86_PAE
+{
+	unsigned long page;
 	asm("movl %%cr3,%0":"=r" (page));
 	page = ((unsigned long *) __va(page))[address >> 22];
 	printk(KERN_ALERT "*pde = %08lx\n", page);
 	if (page & 1) {
-		page &= PAGE_MASK;
+		page &= MMUPAGE_MASK;
 		address &= 0x003ff000;
-		page = ((unsigned long *) __va(page))[address >> PAGE_SHIFT];
+		page = ((unsigned long *) __va(page))[address >> MMUPAGE_SHIFT];
 		printk(KERN_ALERT "*pte = %08lx\n", page);
 	}
+}
+#endif /* !CONFIG_X86_PAE */
+
 	die("Oops", regs, error_code);
 	do_exit(SIGKILL);
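The pde/pte dump in that hunk is easier to follow with the arithmetic
spelled out.  A standalone sketch of the same walk, under the non-PAE
layout the hunk assumes (4k mmu pages, 10/10/12 address split); the
helper name and direct pointer use are illustrative only, since the
kernel code must additionally go through __va() because cr3 and the
pde hold physical addresses:

/* Non-PAE i386 oops-time walk: addr splits into 10 pde bits,
 * 10 pte bits and a 12-bit offset (MMUPAGE_SHIFT). */
static unsigned long dump_walk(unsigned long *pgd_base, unsigned long addr)
{
	unsigned long pde = pgd_base[addr >> 22];	/* top 10 bits */
	unsigned long *pt;

	if (!(pde & 1))			/* bit 0: present? */
		return pde;
	/* mask flag bits: a page table is still one 4k mmu page
	 * (MMUPAGE_MASK), however large PAGE_SIZE itself has become */
	pt = (unsigned long *)(pde & ~0xfffUL);
	return pt[(addr & 0x003ff000) >> 12];	/* middle 10 bits */
}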
diff -urpN linux-2.4.9-linus/arch/i386/mm/init.c linux-2.4.9-larpage/arch/i386/mm/init.c
--- linux-2.4.9-linus/arch/i386/mm/init.c	2001-04-20 16:15:20.000000000 -0700
+++ linux-2.4.9-larpage/arch/i386/mm/init.c	2002-11-20 02:02:25.000000000 -0800
@@ -36,9 +36,15 @@
 #include 
 #include 
 
-unsigned long highstart_pfn, highend_pfn;
 static unsigned long totalram_pages;
 static unsigned long totalhigh_pages;
+struct page *zero_page;
+
+#ifdef CONFIG_HIGHMEM
+unsigned long highend_pfn;
+pgprot_t kmap_prot;
+pte_t *kmap_pte;
+#endif
 
 int do_check_pgt_cache(int low, int high)
 {
@@ -62,58 +68,38 @@ int do_check_pgt_cache(int low, int high
 	return freed;
 }
 
-/*
- * NOTE: pagetable_init alloc all the fixmap pagetables contiguous on the
- * physical space so we can cache the place of the first one and move
- * around without checking the pgd every time.
- */
-
-#if CONFIG_HIGHMEM
-pte_t *kmap_pte;
-pgprot_t kmap_prot;
-
-#define kmap_get_fixmap_pte(vaddr) \
-	pte_offset(pmd_offset(pgd_offset_k(vaddr), (vaddr)), (vaddr))
-
-void __init kmap_init(void)
-{
-	unsigned long kmap_vstart;
-
-	/* cache the first kmap pte */
-	kmap_vstart = __fix_to_virt(FIX_KMAP_BEGIN);
-	kmap_pte = kmap_get_fixmap_pte(kmap_vstart);
-
-	kmap_prot = PAGE_KERNEL;
-}
-#endif /* CONFIG_HIGHMEM */
-
 void show_mem(void)
 {
-	int i, total = 0, reserved = 0;
-	int shared = 0, cached = 0;
-	int highmem = 0;
-
-	printk("Mem-info:\n");
+#define PAGE_KB(nr_pages) ((nr_pages)<<(PAGE_SHIFT-10))
+	struct page *page;
+	unsigned int reserved = 0, swcached = 0;
+	unsigned int slabbed = 0, sharing = 0;
+	int count;
+
+	printk("Total memory: %8lukB Total DMAmem: %8lukB\n",
+		PAGE_KB(max_mapnr),
+		PAGE_KB(contig_page_data.node_zones[ZONE_DMA].size));
+	printk("Total normal: %8lukB Total highmem: %8lukB\n",
+		PAGE_KB(contig_page_data.node_zones[ZONE_NORMAL].size),
+		PAGE_KB(contig_page_data.node_zones[ZONE_HIGHMEM].size));
 	show_free_areas();
-	printk("Free swap:       %6dkB\n",nr_swap_pages<<(PAGE_SHIFT-10));
-	i = max_mapnr;
-	while (i-- > 0) {
-		total++;
-		if (PageHighMem(mem_map+i))
-			highmem++;
-		if (PageReserved(mem_map+i))
+
+	for (page = mem_map + max_mapnr; --page >= mem_map; ) {
+		if (PageReserved(page))
 			reserved++;
-		else if (PageSwapCache(mem_map+i))
-			cached++;
-		else if (page_count(mem_map+i))
-			shared += page_count(mem_map+i) - 1;
-	}
-	printk("%d pages of RAM\n", total);
-	printk("%d pages of HIGHMEM\n",highmem);
-	printk("%d reserved pages\n",reserved);
-	printk("%d pages shared\n",shared);
-	printk("%d pages swap cached\n",cached);
-	printk("%ld pages in page table cache\n",pgtable_cache_size);
+		else if (PageSwapCache(page))
+			swcached++;
+		else if (PageSlab(page))
+			slabbed++;
+		else if ((count = page_count(page)) > PAGE_MMUCOUNT)
+			sharing += (count - 1) >> PAGE_MMUSHIFT;
+	}
+	printk("Reserved mem: %8ukB Page sharing: %8ukB\n",
+		PAGE_KB(reserved), PAGE_KB(sharing));
+	printk("Swap cache:   %8ukB Free swap:    %8ukB\n",
+		PAGE_KB(swcached), PAGE_KB(nr_swap_pages));
+	printk("Page cache:   %8ukB Slab cache:   %8ukB\n",
+
PAGE_KB(atomic_read(&page_cache_size)), PAGE_KB(slabbed)); show_buffers(); } @@ -164,141 +150,92 @@ void __set_fixmap (enum fixed_addresses set_pte_phys(address, phys, flags); } -static void __init fixrange_init (unsigned long start, unsigned long end, pgd_t *pgd_base) -{ - pgd_t *pgd; - pmd_t *pmd; - pte_t *pte; - int i, j; - unsigned long vaddr; - - vaddr = start; - i = __pgd_offset(vaddr); - j = __pmd_offset(vaddr); - pgd = pgd_base + i; - - for ( ; (i < PTRS_PER_PGD) && (vaddr != end); pgd++, i++) { -#if CONFIG_X86_PAE - if (pgd_none(*pgd)) { - pmd = (pmd_t *) alloc_bootmem_low_pages(PAGE_SIZE); - set_pgd(pgd, __pgd(__pa(pmd) + 0x1)); - if (pmd != pmd_offset(pgd, 0)) - printk("PAE BUG #02!\n"); - } - pmd = pmd_offset(pgd, vaddr); -#else - pmd = (pmd_t *)pgd; -#endif - for (; (j < PTRS_PER_PMD) && (vaddr != end); pmd++, j++) { - if (pmd_none(*pmd)) { - pte = (pte_t *) alloc_bootmem_low_pages(PAGE_SIZE); - set_pmd(pmd, __pmd(_KERNPG_TABLE + __pa(pte))); - if (pte != pte_offset(pmd, 0)) - BUG(); - } - vaddr += PMD_SIZE; - } - j = 0; - } -} - static void __init pagetable_init (void) { - unsigned long vaddr, end; - pgd_t *pgd, *pgd_base; - int i, j, k; + unsigned long entry, end; pmd_t *pmd; - pte_t *pte, *pte_base; + pte_t *pte; + int i; /* - * This can be zero as well - no problem, in that case we exit - * the loops anyway due to the PTRS_PER_* conditions. + * It's generally assumed that the user/kernel boundary + * coincides with a pgd entry boundary (4MB, or 1GB if PAE): + * the restriction could be lifted but why bother? just check. */ - end = (unsigned long)__va(max_low_pfn*PAGE_SIZE); - - pgd_base = swapper_pg_dir; -#if CONFIG_X86_PAE - for (i = 0; i < PTRS_PER_PGD; i++) - set_pgd(pgd_base + i, __pgd(1 + __pa(empty_zero_page))); +#if (__PAGE_OFFSET & (PGDIR_SIZE - 1)) +#error PAGE_OFFSET should be a multiple of PGDIR_SIZE! #endif - i = __pgd_offset(PAGE_OFFSET); - pgd = pgd_base + i; - for (; i < PTRS_PER_PGD; pgd++, i++) { - vaddr = i*PGDIR_SIZE; - if (end && (vaddr >= end)) - break; #if CONFIG_X86_PAE - pmd = (pmd_t *) alloc_bootmem_low_pages(PAGE_SIZE); - set_pgd(pgd, __pgd(__pa(pmd) + 0x1)); -#else - pmd = (pmd_t *)pgd; + /* + * Usually only one page is needed here: if PAGE_OFFSET lowered, + * maybe three pages: need not be contiguous, but might as well. + */ + pmd = (pmd_t *)alloc_bootmem_low_pages(KERNEL_PGD_PTRS*MMUPAGE_SIZE); + for (i = 1; i < USER_PGD_PTRS; i++) + set_pgd(swapper_pg_dir+i, __pgd(1 + __pa(empty_zero_page))); + for (; i < PTRS_PER_PGD; i++, pmd += PTRS_PER_PMD) + set_pgd(swapper_pg_dir+i, __pgd(1 + __pa(pmd))); + /* + * Add low memory identity-mappings - SMP needs it when + * starting up on an AP from real-mode. In the non-PAE + * case we already have these mappings through head.S. + * All user-space mappings are explicitly cleared after + * SMP startup. 
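+	 *
+	 * (The copy below aliases the kernel's first pgd slot -- the
+	 * one mapping PAGE_OFFSET onto physical 0 -- into slot 0, so
+	 * low memory is temporarily reachable at virtual == physical.)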
+ */ + swapper_pg_dir[0] = swapper_pg_dir[USER_PGD_PTRS]; #endif - if (pmd != pmd_offset(pgd, 0)) - BUG(); - for (j = 0; j < PTRS_PER_PMD; pmd++, j++) { - vaddr = i*PGDIR_SIZE + j*PMD_SIZE; - if (end && (vaddr >= end)) - break; - if (cpu_has_pse) { - unsigned long __pe; - - set_in_cr4(X86_CR4_PSE); - boot_cpu_data.wp_works_ok = 1; - __pe = _KERNPG_TABLE + _PAGE_PSE + __pa(vaddr); - /* Make it "global" too if supported */ - if (cpu_has_pge) { - set_in_cr4(X86_CR4_PGE); - __pe += _PAGE_GLOBAL; - } - set_pmd(pmd, __pmd(__pe)); - continue; - } - - pte_base = pte = (pte_t *) alloc_bootmem_low_pages(PAGE_SIZE); - for (k = 0; k < PTRS_PER_PTE; pte++, k++) { - vaddr = i*PGDIR_SIZE + j*PMD_SIZE + k*PAGE_SIZE; - if (end && (vaddr >= end)) - break; - *pte = mk_pte_phys(__pa(vaddr), PAGE_KERNEL); - } - set_pmd(pmd, __pmd(_KERNPG_TABLE + __pa(pte_base))); - if (pte_base != pte_offset(pmd, 0)) - BUG(); + /* + * Map in all the low memory pages: using PSE if available, + * or by allocating and populating page tables if no PSE. + */ + pmd = pmd_offset(pgd_offset_k(PAGE_OFFSET), PAGE_OFFSET); + end = max_low_pfn << MMUPAGE_SHIFT; + if (cpu_has_pse) { + set_in_cr4(X86_CR4_PSE); + boot_cpu_data.wp_works_ok = 1; + entry = _KERNPG_TABLE | _PAGE_PSE; + /* Make it "global" too if supported */ + if (cpu_has_pge) { + entry |= _PAGE_GLOBAL; + set_in_cr4(X86_CR4_PGE); } + for (; entry < end; pmd++, entry += PTRS_PER_PTE*MMUPAGE_SIZE) + set_pmd(pmd, __pmd(entry)); + } + else for (entry = 0; entry < end; pmd++) { + pte = (pte_t *)alloc_bootmem_low_pages(MMUPAGE_SIZE); + for (i = 0; i < PTRS_PER_PTE; i++, pte++, entry += MMUPAGE_SIZE) { + if (entry >= end) + break; + *pte = mk_pte_phys(entry, PAGE_KERNEL); + } + set_pmd(pmd, __pmd(_KERNPG_TABLE + __pa(pte - i))); } /* - * Fixed mappings, only the page table structure has to be - * created - mappings will be set by set_fixmap(): - */ - vaddr = __fix_to_virt(__end_of_fixed_addresses - 1) & PMD_MASK; - fixrange_init(vaddr, 0, pgd_base); - -#if CONFIG_HIGHMEM - /* - * Permanent kmaps: + * Leave vmalloc() to create its own page tables as needed, + * but create the page tables at top of virtual memory, to be + * populated by kmap_atomic(), kmap_high() and set_fixmap(). + * kmap_high() assumes pkmap_page_table contiguous throughout. */ - vaddr = PKMAP_BASE; - fixrange_init(vaddr, vaddr + PAGE_SIZE*LAST_PKMAP, pgd_base); + pmd = pmd_offset(pgd_offset_k(VMALLOC_END), VMALLOC_END); + i = (0UL - (VMALLOC_END & PMD_MASK)) >> PMD_SHIFT; + pte = (pte_t *)alloc_bootmem_low_pages(i*MMUPAGE_SIZE); + for (; --i >= 0; pmd++, pte += PTRS_PER_PTE) + set_pmd(pmd, __pmd(_KERNPG_TABLE + __pa(pte))); - pgd = swapper_pg_dir + __pgd_offset(vaddr); - pmd = pmd_offset(pgd, vaddr); - pte = pte_offset(pmd, vaddr); - pkmap_page_table = pte; -#endif - -#if CONFIG_X86_PAE +#ifdef CONFIG_HIGHMEM /* - * Add low memory identity-mappings - SMP needs it when - * starting up on an AP from real-mode. In the non-PAE - * case we already have these mappings through head.S. - * All user-space mappings are explicitly cleared after - * SMP startup. + * For asm/highmem.h kmap_atomic() and mm/highmem.c kmap_high(). 
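+	 *
+	 * (The pte pages for this range were allocated contiguously
+	 * just above, so kmap_pte can be indexed as one flat array:
+	 * pkmap_page_table below is kmap_pte plus the number of mmu
+	 * pages between KMAP_BASE and PKMAP_BASE.)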
*/ - pgd_base[0] = pgd_base[USER_PTRS_PER_PGD]; + kmap_prot = PAGE_KERNEL; + kmap_pte = pte_offset(pmd_offset(pgd_offset_k( + KMAP_BASE), KMAP_BASE), KMAP_BASE); + pkmap_page_table = kmap_pte + + ((PKMAP_BASE - KMAP_BASE) >> MMUPAGE_SHIFT); #endif } @@ -344,24 +281,20 @@ void __init paging_init(void) __flush_tlb_all(); -#ifdef CONFIG_HIGHMEM - kmap_init(); -#endif { unsigned long zones_size[MAX_NR_ZONES] = {0, 0, 0}; - unsigned int max_dma, high, low; + unsigned int max_dma_pfn; - max_dma = virt_to_phys((char *)MAX_DMA_ADDRESS) >> PAGE_SHIFT; - low = max_low_pfn; - high = highend_pfn; - - if (low < max_dma) - zones_size[ZONE_DMA] = low; + max_dma_pfn = (MAX_DMA_ADDRESS - PAGE_OFFSET) >> MMUPAGE_SHIFT; + if (max_low_pfn < max_dma_pfn) + zones_size[ZONE_DMA] = max_low_pfn >> PAGE_MMUSHIFT; else { - zones_size[ZONE_DMA] = max_dma; - zones_size[ZONE_NORMAL] = low - max_dma; + zones_size[ZONE_DMA] = max_dma_pfn >> PAGE_MMUSHIFT; + zones_size[ZONE_NORMAL] = + (max_low_pfn - max_dma_pfn) >> PAGE_MMUSHIFT; #ifdef CONFIG_HIGHMEM - zones_size[ZONE_HIGHMEM] = high - low; + zones_size[ZONE_HIGHMEM] = + (highend_pfn - max_low_pfn) >> PAGE_MMUSHIFT; #endif } free_area_init(zones_size); @@ -440,34 +373,50 @@ static inline int page_is_ram (unsigned void __init mem_init(void) { int codesize, reservedpages, datasize, initsize; - int tmp; + int tmp, top; if (!mem_map) BUG(); #ifdef CONFIG_HIGHMEM - highmem_start_page = mem_map + highstart_pfn; - max_mapnr = num_physpages = highend_pfn; + max_mapnr = num_physpages = (highend_pfn >> PAGE_MMUSHIFT); #else - max_mapnr = num_physpages = max_low_pfn; + max_mapnr = num_physpages = (max_low_pfn >> PAGE_MMUSHIFT); #endif - high_memory = (void *) __va(max_low_pfn * PAGE_SIZE); + highmem_start_page = mem_map + (max_low_pfn >> PAGE_MMUSHIFT); + high_memory = (void *) __va(max_low_pfn << MMUPAGE_SHIFT); /* clear the zero-page */ - memset(empty_zero_page, 0, PAGE_SIZE); + memset(empty_zero_page, 0, MMUPAGE_SIZE); /* this will put all low memory onto the freelists */ totalram_pages += free_all_bootmem(); + if (MMUPAGE_SHIFT) { + /* + * For most purposes, the mmupage empty_zero_page + * would be enough; but if a kiobuf is to be useful + * on whole pages, we need a whole page zeroed. 
+		 */
+		zero_page = virt_to_page(get_zeroed_page(GFP_ATOMIC));
+		SetPageReserved(zero_page);
+		totalram_pages--;
+	}
+	else
+		zero_page = virt_to_page(empty_zero_page);
+
 	reservedpages = 0;
-	for (tmp = 0; tmp < max_low_pfn; tmp++)
+	top = max_low_pfn >> PAGE_MMUSHIFT;
+	for (tmp = 0; tmp < top; tmp++) {
 		/*
 		 * Only count reserved RAM pages
 		 */
-		if (page_is_ram(tmp) && PageReserved(mem_map+tmp))
+		if (PageReserved(mem_map+tmp) && page_is_ram(tmp))
 			reservedpages++;
+	}
 #ifdef CONFIG_HIGHMEM
-	for (tmp = highstart_pfn; tmp < highend_pfn; tmp++) {
+	top = highend_pfn >> PAGE_MMUSHIFT;
+	for (tmp = max_low_pfn >> PAGE_MMUSHIFT; tmp < top; tmp++) {
 		struct page *page = mem_map + tmp;
 
 		if (!page_is_ram(tmp)) {
@@ -476,7 +425,7 @@ void __init mem_init(void)
 		}
 		ClearPageReserved(page);
 		set_bit(PG_highmem, &page->flags);
-		atomic_set(&page->count, 1);
+		set_page_count(page, 1);
 		__free_page(page);
 		totalhigh_pages++;
 	}
@@ -539,31 +488,35 @@ static int do_test_wp_bit(unsigned long
 	return flag;
 }
 
-void free_initmem(void)
+static unsigned long free_memk(unsigned long start, unsigned long end)
 {
 	unsigned long addr;
 
-	addr = (unsigned long)(&__init_begin);
-	for (; addr < (unsigned long)(&__init_end); addr += PAGE_SIZE) {
+	start = PAGE_ALIGN(start);
+	end &= PAGE_MASK;
+	for (addr = start; addr < end; addr += PAGE_SIZE) {
 		ClearPageReserved(virt_to_page(addr));
 		set_page_count(virt_to_page(addr), 1);
 		free_page(addr);
 		totalram_pages++;
 	}
-	printk ("Freeing unused kernel memory: %dk freed\n", (&__init_end - &__init_begin) >> 10);
+	/* return kBytes freed */
+	return (end - start) >> 10;
+}
+
+void free_initmem(void)
+{
+	unsigned long freed = free_memk((unsigned long)(&__init_begin),
+					(unsigned long)(&__init_end));
+	printk ("Freeing unused kernel memory: %luk freed\n", freed);
}
 
 #ifdef CONFIG_BLK_DEV_INITRD
 void free_initrd_mem(unsigned long start, unsigned long end)
 {
-	if (start < end)
-		printk ("Freeing initrd memory: %ldk freed\n", (end - start) >> 10);
-	for (; start < end; start += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(start));
-		set_page_count(virt_to_page(start), 1);
-		free_page(start);
-		totalram_pages++;
-	}
+	unsigned long freed = free_memk(start, end);
+	if (freed)
+		printk ("Freeing initrd memory: %luk freed\n", freed);
 }
 #endif
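free_memk() above rounds inward, not outward: with kernel pages now
bigger than mmu pages, a page only partially inside [start, end)
cannot be returned to the allocator.  A worked illustration, assuming
a 32k kernel page purely for the numbers (the EX_* names are not from
the patch):

#define EX_PAGE_SIZE	0x8000UL		/* illustrative 32k page */
#define EX_PAGE_MASK	(~(EX_PAGE_SIZE - 1))
#define EX_PAGE_ALIGN(a)	(((a) + EX_PAGE_SIZE - 1) & EX_PAGE_MASK)

/* free_memk(0x1234, 0xf000) keeps only whole kernel pages: */
unsigned long ex_start = EX_PAGE_ALIGN(0x1234);	/* rounds up to 0x8000 */
unsigned long ex_end   = 0xf000 & EX_PAGE_MASK;	/* rounds down to 0x8000 */
/* ex_start == ex_end: no whole kernel page lies inside, nothing is
 * freed, and the reported (end - start) >> 10 kB count is 0.  Partial
 * pages at either boundary must stay reserved. */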
diff -urpN linux-2.4.9-linus/arch/i386/mm/ioremap.c linux-2.4.9-larpage/arch/i386/mm/ioremap.c
--- linux-2.4.9-linus/arch/i386/mm/ioremap.c	2001-03-20 08:13:33.000000000 -0800
+++ linux-2.4.9-larpage/arch/i386/mm/ioremap.c	2002-11-20 02:02:26.000000000 -0800
@@ 
-30,8 +30,8 @@ static inline void remap_area_pte(pte_t } set_pte(pte, mk_pte_phys(phys_addr, __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | _PAGE_ACCESSED | flags))); - address += PAGE_SIZE; - phys_addr += PAGE_SIZE; + address += MMUPAGE_SIZE; + phys_addr += MMUPAGE_SIZE; pte++; } while (address && (address < end)); } @@ -86,7 +86,6 @@ static int remap_area_pages(unsigned lon dir++; } while (address && (address < end)); spin_unlock(&init_mm.page_table_lock); - flush_tlb_all(); return error; } diff -urpN linux-2.4.9-linus/drivers/char/agp/agp.h linux-2.4.9-larpage/drivers/char/agp/agp.h --- linux-2.4.9-linus/drivers/char/agp/agp.h 2001-08-15 01:22:15.000000000 -0700 +++ linux-2.4.9-larpage/drivers/char/agp/agp.h 2002-11-20 02:02:26.000000000 -0800 @@ -93,9 +93,9 @@ struct agp_bridge_data { enum chipset_type type; enum aper_size_type size_type; unsigned long *key_list; - atomic_t current_memory_agp; + atomic_t current_memory_agp; /* in AGP_PAGE_SIZE units */ atomic_t agp_in_use; - int max_memory_agp; /* in number of pages */ + int max_memory_agp; /* in AGP_PAGE_SIZE units */ int needs_scratch_page; int aperture_size_idx; int num_aperture_sizes; diff -urpN linux-2.4.9-linus/drivers/char/agp/agpgart_be.c linux-2.4.9-larpage/drivers/char/agp/agpgart_be.c --- linux-2.4.9-linus/drivers/char/agp/agpgart_be.c 2001-08-15 01:22:15.000000000 -0700 +++ linux-2.4.9-larpage/drivers/char/agp/agpgart_be.c 2002-11-20 02:02:34.000000000 -0800 @@ -162,7 +162,7 @@ static int agp_get_key(void) return -1; } -static agp_memory *agp_create_memory(int scratch_pages) +static agp_memory *agp_create_memory(size_t page_count) { agp_memory *new; @@ -178,14 +178,16 @@ static agp_memory *agp_create_memory(int kfree(new); return NULL; } - new->memory = vmalloc(PAGE_SIZE * scratch_pages); - if (new->memory == NULL) { - agp_free_key(new->key); - kfree(new); - return NULL; + if (page_count) { + new->memory = vmalloc(page_count * sizeof(unsigned long)); + + if (new->memory == NULL) { + agp_free_key(new->key); + kfree(new); + return NULL; + } } - new->num_scratch_pages = scratch_pages; return new; } @@ -203,12 +205,10 @@ void agp_free_memory(agp_memory * curr) agp_bridge.free_by_type(curr); return; } - if (curr->page_count != 0) { - for (i = 0; i < curr->page_count; i++) { - curr->memory[i] &= ~(0x00000fff); - agp_bridge.agp_destroy_page((unsigned long) - phys_to_virt(curr->memory[i])); - } + for (i = 0; i < curr->page_count; i += PAGE_SIZE/AGP_PAGE_SIZE) { + curr->memory[i] &= ~(AGP_PAGE_SIZE-1); + agp_bridge.agp_destroy_page((unsigned long) + phys_to_virt(curr->memory[i])); } agp_free_key(curr->key); vfree(curr->memory); @@ -216,11 +216,8 @@ void agp_free_memory(agp_memory * curr) MOD_DEC_USE_COUNT; } -#define ENTRIES_PER_PAGE (PAGE_SIZE / sizeof(unsigned long)) - agp_memory *agp_allocate_memory(size_t page_count, u32 type) { - int scratch_pages; agp_memory *new; int i; @@ -242,16 +239,18 @@ agp_memory *agp_allocate_memory(size_t p MOD_INC_USE_COUNT; - scratch_pages = (page_count + ENTRIES_PER_PAGE - 1) / ENTRIES_PER_PAGE; - - new = agp_create_memory(scratch_pages); + new = agp_create_memory(page_count); if (new == NULL) { MOD_DEC_USE_COUNT; return NULL; } for (i = 0; i < page_count; i++) { - new->memory[i] = agp_bridge.agp_alloc_page(); + if ((i % (PAGE_SIZE/AGP_PAGE_SIZE)) == 0) + new->memory[i] = agp_bridge.agp_alloc_page(); + else + new->memory[i] = (new->memory[i-1] & + ~(AGP_PAGE_SIZE-1)) + AGP_PAGE_SIZE; if (new->memory[i] == 0) { /* Free this structure */ @@ -558,6 +557,9 @@ static int agp_generic_create_gatt_table 
@@ -558,6 +557,9 @@ static int agp_generic_create_gatt_table
 				break;
 			}
 
+			page_order -= (PAGE_SHIFT - AGP_PAGE_SHIFT);
+			if (page_order < 0)
+				page_order = 0;
 			table = (char *) __get_free_pages(GFP_KERNEL,
 							  page_order);
 
@@ -592,6 +594,9 @@ static int agp_generic_create_gatt_table
 		size = ((aper_size_info_fixed *) temp)->size;
 		page_order = ((aper_size_info_fixed *) temp)->page_order;
 		num_entries = ((aper_size_info_fixed *) temp)->num_entries;
+		page_order -= (PAGE_SHIFT - AGP_PAGE_SHIFT);
+		if (page_order < 0)
+			page_order = 0;
 		table = (char *) __get_free_pages(GFP_KERNEL, page_order);
 	}
 
@@ -665,6 +670,9 @@ static int agp_generic_free_gatt_table(v
 	iounmap(agp_bridge.gatt_table);
 	table = (char *) agp_bridge.gatt_table_real;
+	page_order -= (PAGE_SHIFT - AGP_PAGE_SHIFT);
+	if (page_order < 0)
+		page_order = 0;
 	table_end = table + ((PAGE_SIZE * (1 << page_order)) - 1);
 
 	for (page = virt_to_page(table); page <= virt_to_page(table_end); page++)
@@ -783,7 +791,7 @@ static unsigned long agp_generic_alloc_p
 	}
 	atomic_inc(&virt_to_page(pt)->count);
 	set_bit(PG_locked, &virt_to_page(pt)->flags);
-	atomic_inc(&agp_bridge.current_memory_agp);
+	atomic_add(PAGE_SIZE/AGP_PAGE_SIZE, &agp_bridge.current_memory_agp);
 	return (unsigned long) pt;
 }
 
@@ -798,7 +806,7 @@ static void agp_generic_destroy_page(uns
 	clear_bit(PG_locked, &virt_to_page(pt)->flags);
 	wake_up(&virt_to_page(pt)->wait);
 	free_page((unsigned long) pt);
-	atomic_dec(&agp_bridge.current_memory_agp);
+	atomic_sub(PAGE_SIZE/AGP_PAGE_SIZE, &agp_bridge.current_memory_agp);
 }
 
 /* End Basic Page Allocation Routines */
@@ -874,7 +882,7 @@ static int intel_i810_configure(void)
 	temp &= 0xfff80000;
 
 	intel_i810_private.registers =
-		(volatile u8 *) ioremap(temp, 128 * 4096);
+		(volatile u8 *) ioremap(temp, 128 * AGP_PAGE_SIZE);
 
 	if ((INREG32(intel_i810_private.registers, I810_DRAM_CTL)
 		& I810_DRAM_ROW_0) == I810_DRAM_ROW_0_SDRAM) {
@@ -941,7 +949,7 @@ static int intel_i810_insert_entries(agp
 			i < (pg_start + mem->page_count); i++) {
 			OUTREG32(intel_i810_private.registers,
 				 I810_PTE_BASE + (i * 4),
-				 (i * 4096) | I810_PTE_LOCAL |
+				 (i * AGP_PAGE_SIZE) | I810_PTE_LOCAL |
 				 I810_PTE_VALID);
 		}
 		CACHE_FLUSH();
@@ -991,15 +999,13 @@ static agp_memory *intel_i810_alloc_by_t
 		if (pg_count != intel_i810_private.num_dcache_entries) {
 			return NULL;
 		}
-		new = agp_create_memory(1);
+		new = agp_create_memory(0);
 
 		if (new == NULL) {
 			return NULL;
 		}
 		new->type = AGP_DCACHE_MEMORY;
 		new->page_count = pg_count;
-		new->num_scratch_pages = 0;
-		vfree(new->memory);
 		MOD_INC_USE_COUNT;
 		return new;
 	}
@@ -1030,7 +1036,6 @@ static agp_memory *intel_i810_alloc_by_t
 				virt_to_phys((void *) new->memory[0]),
 				type);
 		new->page_count = 1;
-		new->num_scratch_pages = 1;
 		new->type = AGP_PHYS_MEMORY;
 		new->physical = virt_to_phys((void *) new->memory[0]);
 		return new;
@@ -1603,7 +1608,7 @@ static int amd_create_page_map(amd_page_
 	}
 	CACHE_FLUSH();
 
-	for(i = 0; i < PAGE_SIZE / sizeof(unsigned long); i++) {
+	for(i = 0; i < AGP_PAGE_SIZE / sizeof(unsigned long); i++) {
 		page_map->remapped[i] = agp_bridge.scratch_page;
 	}
 
@@ -1628,7 +1633,8 @@ static void amd_free_gatt_pages(void)
 	for(i = 0; i < amd_irongate_private.num_tables; i++) {
 		entry = tables[i];
 		if (entry != NULL) {
-			if (entry->real != NULL) {
+			if (entry->real != NULL &&
+			    (i % (PAGE_SIZE/AGP_PAGE_SIZE)) == 0) {
 				amd_free_page_map(entry);
 			}
			kfree(entry);
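The same sharing shows up in the GATT page tables: amd_create_page_map()
still consumes a full kernel page, so the amd_create_gatt_pages() hunk
below makes only every (PAGE_SIZE/AGP_PAGE_SIZE)th amd_page_map own a
real allocation and aliases the rest into it at 1024-entry steps, 1024
being AGP_PAGE_SIZE / sizeof(unsigned long), i.e. one 4K GATT page.  A
sketch of the aliasing arithmetic (gatt_subtable() is a made-up name,
not in the patch):

	/* Sketch: locate table i's 4K slice inside the kernel page owned
	 * by the most recent owner table (the one at i rounded down to a
	 * multiple of PAGE_SIZE/AGP_PAGE_SIZE).
	 */
	static unsigned long *gatt_subtable(unsigned long *owner_real, int i)
	{
		return owner_real + (i % (PAGE_SIZE/AGP_PAGE_SIZE)) * 1024;
	}

The matching amd_free_gatt_pages() change above frees a page map only at
owner indices, so each kernel page is released exactly once.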
@@ -1658,8 +1664,13 @@ static int amd_create_gatt_pages(int nr_
 		}
 		memset(entry, 0, sizeof(amd_page_map));
 		tables[i] = entry;
-		retval = amd_create_page_map(entry);
-		if (retval != 0) break;
+		if ((i % (PAGE_SIZE/AGP_PAGE_SIZE)) == 0) {
+			retval = amd_create_page_map(entry);
+			if (retval != 0) break;
+		} else {
+			entry->real = tables[i-1]->real + 1024;
+			entry->remapped = tables[i-1]->remapped + 1024;
+		}
 	}
 	amd_irongate_private.num_tables = nr_tables;
 	amd_irongate_private.gatt_pages = tables;
@@ -1769,7 +1780,7 @@ static int amd_irongate_configure(void)
 	/* Get the memory mapped registers */
 	pci_read_config_dword(agp_bridge.dev, AMD_MMBASE, &temp);
 	temp = (temp & PCI_BASE_ADDRESS_MEM_MASK);
-	amd_irongate_private.registers = (volatile u8 *) ioremap(temp, 4096);
+	amd_irongate_private.registers = (volatile u8 *) ioremap(temp, AGP_PAGE_SIZE);
 
 	/* Write out the address of the gatt table */
 	OUTREG32(amd_irongate_private.registers, AMD_ATTBASE,
@@ -1855,7 +1866,7 @@ static int amd_insert_memory(agp_memory
 	j = pg_start;
 	while (j < (pg_start + mem->page_count)) {
-		addr = (j * PAGE_SIZE) + agp_bridge.gart_bus_addr;
+		addr = (j * AGP_PAGE_SIZE) + agp_bridge.gart_bus_addr;
 		cur_gatt = GET_GATT(addr);
 		if (!PGE_EMPTY(cur_gatt[GET_GATT_OFF(addr)])) {
 			return -EBUSY;
 		}
@@ -1869,7 +1880,7 @@ static int amd_insert_memory(agp_memory
 	}
 	for (i = 0, j = pg_start; i < mem->page_count; i++, j++) {
-		addr = (j * PAGE_SIZE) + agp_bridge.gart_bus_addr;
+		addr = (j * AGP_PAGE_SIZE) + agp_bridge.gart_bus_addr;
 		cur_gatt = GET_GATT(addr);
 		cur_gatt[GET_GATT_OFF(addr)] = mem->memory[i];
 	}
@@ -1888,7 +1899,7 @@ static int amd_remove_memory(agp_memory
 		return -EINVAL;
 	}
 	for (i = pg_start; i < (mem->page_count + pg_start); i++) {
-		addr = (i * PAGE_SIZE) + agp_bridge.gart_bus_addr;
+		addr = (i * AGP_PAGE_SIZE) + agp_bridge.gart_bus_addr;
 		cur_gatt = GET_GATT(addr);
 		cur_gatt[GET_GATT_OFF(addr)] =
 			(unsigned long) agp_bridge.scratch_page;
@@ -2080,7 +2091,7 @@ static void ali_cache_flush(void)
 	u32 temp;
 
 	page_count = 1 << A_SIZE_32(agp_bridge.current_size)->page_order;
-	for (i = 0; i < PAGE_SIZE * page_count; i += PAGE_SIZE) {
+	for (i = 0; i < AGP_PAGE_SIZE * page_count; i += AGP_PAGE_SIZE) {
 		pci_read_config_dword(agp_bridge.dev, ALI_CACHE_FLUSH_CTRL, &temp);
 		pci_write_config_dword(agp_bridge.dev, ALI_CACHE_FLUSH_CTRL,
 			(((temp & ALI_CACHE_FLUSH_ADDR_MASK) |
@@ -2094,6 +2105,7 @@ static unsigned long ali_alloc_page(void
 {
 	void *pt;
 	u32 temp;
+	int i;
 
 	pt = (void *) __get_free_page(GFP_KERNEL);
 	if (pt == NULL)
@@ -2101,24 +2113,27 @@ static unsigned long ali_alloc_page(void
 	atomic_inc(&virt_to_page(pt)->count);
 	set_bit(PG_locked, &virt_to_page(pt)->flags);
-	atomic_inc(&agp_bridge.current_memory_agp);
+	atomic_add(PAGE_SIZE/AGP_PAGE_SIZE, &agp_bridge.current_memory_agp);
 
 	global_cache_flush();
 
 	if (agp_bridge.type == ALI_M1541) {
-		pci_read_config_dword(agp_bridge.dev, ALI_CACHE_FLUSH_CTRL, &temp);
-		pci_write_config_dword(agp_bridge.dev, ALI_CACHE_FLUSH_CTRL,
-			(((temp & ALI_CACHE_FLUSH_ADDR_MASK) |
-			virt_to_phys((void *)pt)) |
-			ALI_CACHE_FLUSH_EN ));
+		for (i = 0; i < PAGE_SIZE/AGP_PAGE_SIZE; i++) {
+			pci_read_config_dword(agp_bridge.dev, ALI_CACHE_FLUSH_CTRL, &temp);
+			pci_write_config_dword(agp_bridge.dev, ALI_CACHE_FLUSH_CTRL,
+				(((temp & ALI_CACHE_FLUSH_ADDR_MASK) |
+				virt_to_phys(pt + i*AGP_PAGE_SIZE)) |
+				ALI_CACHE_FLUSH_EN ));
+		}
 	}
 	return (unsigned long) pt;
 }
 
 static void ali_destroy_page(unsigned long page)
 {
-	u32 temp;
 	void *pt = (void *) page;
+	u32 temp;
+	int i;
 
 	if (pt == NULL)
		return;
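On the ALi M1541 the chipset flush register takes one 4K page address
per write, so a single kernel page now needs PAGE_SIZE/AGP_PAGE_SIZE
writes: once in the allocation loop above, and again in the identical
ali_destroy_page() loop below.  The repeated pattern, factored out as a
sketch (ali_flush_kernel_page() is a hypothetical name; the patch
open-codes the loop in both callers):

	static void ali_flush_kernel_page(void *pt)
	{
		u32 temp;
		int i;

		/* one flush-register write per 4K sub-page of pt */
		for (i = 0; i < PAGE_SIZE/AGP_PAGE_SIZE; i++) {
			pci_read_config_dword(agp_bridge.dev,
					ALI_CACHE_FLUSH_CTRL, &temp);
			pci_write_config_dword(agp_bridge.dev,
					ALI_CACHE_FLUSH_CTRL,
					(temp & ALI_CACHE_FLUSH_ADDR_MASK) |
					virt_to_phys(pt + i*AGP_PAGE_SIZE) |
					ALI_CACHE_FLUSH_EN);
		}
	}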
@@ -2126,18 +2141,20 @@ static void ali_destroy_page(unsigned lo
 	global_cache_flush();
 
 	if (agp_bridge.type == ALI_M1541) {
-		pci_read_config_dword(agp_bridge.dev, ALI_CACHE_FLUSH_CTRL, &temp);
-		pci_write_config_dword(agp_bridge.dev, ALI_CACHE_FLUSH_CTRL,
-			(((temp & ALI_CACHE_FLUSH_ADDR_MASK) |
-			virt_to_phys((void *)pt)) |
-			ALI_CACHE_FLUSH_EN));
+		for (i = 0; i < PAGE_SIZE/AGP_PAGE_SIZE; i++) {
+			pci_read_config_dword(agp_bridge.dev, ALI_CACHE_FLUSH_CTRL, &temp);
+			pci_write_config_dword(agp_bridge.dev, ALI_CACHE_FLUSH_CTRL,
+				(((temp & ALI_CACHE_FLUSH_ADDR_MASK) |
+				virt_to_phys(pt + i*AGP_PAGE_SIZE)) |
+				ALI_CACHE_FLUSH_EN));
+		}
 	}
 	atomic_dec(&virt_to_page(pt)->count);
 	clear_bit(PG_locked, &virt_to_page(pt)->flags);
 	wake_up(&virt_to_page(pt)->wait);
 	free_page((unsigned long) pt);
-	atomic_dec(&agp_bridge.current_memory_agp);
+	atomic_sub(PAGE_SIZE/AGP_PAGE_SIZE, &agp_bridge.current_memory_agp);
 }
 
 /* Setup function */
@@ -2227,7 +2244,7 @@ static int serverworks_create_page_map(s
 	}
 	CACHE_FLUSH();
 
-	for(i = 0; i < PAGE_SIZE / sizeof(unsigned long); i++) {
+	for(i = 0; i < AGP_PAGE_SIZE / sizeof(unsigned long); i++) {
 		page_map->remapped[i] = agp_bridge.scratch_page;
 	}
 
@@ -2252,7 +2269,8 @@ static void serverworks_free_gatt_pages(
 	for(i = 0; i < serverworks_private.num_tables; i++) {
 		entry = tables[i];
 		if (entry != NULL) {
-			if (entry->real != NULL) {
+			if (entry->real != NULL &&
+			    (i % (PAGE_SIZE/AGP_PAGE_SIZE)) == 0) {
 				serverworks_free_page_map(entry);
 			}
 			kfree(entry);
@@ -2282,8 +2300,13 @@ static int serverworks_create_gatt_pages
 		}
 		memset(entry, 0, sizeof(serverworks_page_map));
 		tables[i] = entry;
-		retval = serverworks_create_page_map(entry);
-		if (retval != 0) break;
+		if ((i % (PAGE_SIZE/AGP_PAGE_SIZE)) == 0) {
+			retval = serverworks_create_page_map(entry);
+			if (retval != 0) break;
+		} else {
+			entry->real = tables[i-1]->real + 1024;
+			entry->remapped = tables[i-1]->remapped + 1024;
+		}
 	}
 	serverworks_private.num_tables = nr_tables;
 	serverworks_private.gatt_pages = tables;
@@ -2324,11 +2347,12 @@ static int serverworks_create_gatt_table
 	}
 	retval = serverworks_create_page_map(&serverworks_private.scratch_dir);
 	if (retval != 0) {
+		serverworks_free_page_map(&serverworks_private.scratch_dir);
 		serverworks_free_page_map(&page_dir);
 		return retval;
 	}
 	/* Create a fake scratch directory */
-	for(i = 0; i < 1024; i++) {
+	for(i = 0; i < AGP_PAGE_SIZE / sizeof(unsigned long); i++) {
 		serverworks_private.scratch_dir.remapped[i] = (unsigned long) agp_bridge.scratch_page;
 		page_dir.remapped[i] =
 			virt_to_bus(serverworks_private.scratch_dir.real);
@@ -2375,6 +2399,6 @@ static int serverworks_free_gatt_table(v
 	page_dir.remapped = agp_bridge.gatt_table;
 
 	serverworks_free_gatt_pages();
+	serverworks_free_page_map(&serverworks_private.scratch_dir);
 	serverworks_free_page_map(&page_dir);
-	serverworks_free_page_map(&serverworks_private.scratch_dir);
 	return 0;
@@ -2549,7 +2573,7 @@ static int serverworks_insert_memory(agp
 	j = pg_start;
 	while (j < (pg_start + mem->page_count)) {
-		addr = (j * PAGE_SIZE) + agp_bridge.gart_bus_addr;
+		addr = (j * AGP_PAGE_SIZE) + agp_bridge.gart_bus_addr;
 		cur_gatt = SVRWRKS_GET_GATT(addr);
 		if (!PGE_EMPTY(cur_gatt[GET_GATT_OFF(addr)])) {
 			return -EBUSY;
 		}
@@ -2563,7 +2587,7 @@ static int serverworks_insert_memory(agp
 	}
 	for (i = 0, j = pg_start; i < mem->page_count; i++, j++) {
-		addr = (j * PAGE_SIZE) + agp_bridge.gart_bus_addr;
+		addr = (j * AGP_PAGE_SIZE) + agp_bridge.gart_bus_addr;
 		cur_gatt = SVRWRKS_GET_GATT(addr);
 		cur_gatt[GET_GATT_OFF(addr)] = mem->memory[i];
 	}
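Throughout these insert/remove paths the GART offset arithmetic is now
in AGP_PAGE_SIZE units: entry j lives at bus address gart_bus_addr +
j * AGP_PAGE_SIZE, whose high bits select the directory slot and whose
bits 21..12 select the entry within one 4K GATT page.  Roughly, after
the GET_GATT()/GET_GATT_OFF() macros used in these routines (a sketch,
not patch code; gatt_locate() is a made-up name):

	/* Sketch: split a GART bus address into (directory slot, entry). */
	static void gatt_locate(off_t j, unsigned long *dir_slot,
				unsigned long *entry)
	{
		unsigned long addr = agp_bridge.gart_bus_addr
					+ j * AGP_PAGE_SIZE;

		*dir_slot = addr >> 22;			/* one GATT page maps 4M */
		*entry = (addr & 0x003ff000) >> 12;	/* 1024 entries per page */
	}

With 4K AGP pages this is unchanged from before; the point is that the
entry index no longer scales with the possibly much larger kernel
PAGE_SIZE.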
@@ -2586,7 +2610,7 @@ static int serverworks_remove_memory(agp
 	agp_bridge.tlb_flush(mem);
 
 	for (i = pg_start; i < (mem->page_count + pg_start); i++) {
-		addr = (i * PAGE_SIZE) + agp_bridge.gart_bus_addr;
+		addr = (i * AGP_PAGE_SIZE) + agp_bridge.gart_bus_addr;
 		cur_gatt = SVRWRKS_GET_GATT(addr);
 		cur_gatt[GET_GATT_OFF(addr)] =
 			(unsigned long) agp_bridge.scratch_page;
@@ -3303,8 +3327,9 @@ static int __init agp_find_max (void)
 	printk(KERN_INFO PFX "Maximum main memory to use "
 	       "for agp memory: %ldM\n", result);
-	result = result << (20 - PAGE_SHIFT);
-	return result;
+	result <<= (20 - PAGE_SHIFT);		/* convert to pages */
+	result *= PAGE_SIZE / AGP_PAGE_SIZE;	/* convert to AGP pages */
+	return result;
 }
 
 #define AGPGART_VERSION_MAJOR 0
@@ -3361,7 +3386,7 @@ static int __init agp_backend_initialize
 	}
 	got_gatt = 1;
 
-	agp_bridge.key_list = vmalloc(PAGE_SIZE * 4);
+	agp_bridge.key_list = vmalloc(MAXKEY/8);
 	if (agp_bridge.key_list == NULL) {
 		printk(KERN_ERR PFX "error allocating memory for key lists.\n");
 		rc = -ENOMEM;
@@ -3370,7 +3395,7 @@ static int __init agp_backend_initialize
 	got_keylist = 1;
 
 	/* FIXME vmalloc'd memory not guaranteed contiguous */
-	memset(agp_bridge.key_list, 0, PAGE_SIZE * 4);
+	memset(agp_bridge.key_list, 0, MAXKEY/8);
 
 	if (agp_bridge.configure()) {
 		printk(KERN_ERR PFX "error configuring host chipset.\n");
@@ -3385,7 +3410,7 @@ static int __init agp_backend_initialize
 
 err_out:
 	if (agp_bridge.needs_scratch_page == TRUE) {
-		agp_bridge.scratch_page &= ~(0x00000fff);
+		agp_bridge.scratch_page &= ~(AGP_PAGE_SIZE-1);
 		agp_bridge.agp_destroy_page((unsigned long)
 				phys_to_virt(agp_bridge.scratch_page));
 	}
@@ -3405,7 +3430,7 @@ static void agp_backend_cleanup(void)
 	vfree(agp_bridge.key_list);
 
 	if (agp_bridge.needs_scratch_page == TRUE) {
-		agp_bridge.scratch_page &= ~(0x00000fff);
+		agp_bridge.scratch_page &= ~(AGP_PAGE_SIZE-1);
 		agp_bridge.agp_destroy_page((unsigned long)
 				phys_to_virt(agp_bridge.scratch_page));
 	}
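The agp_find_max() arithmetic above now converts twice, which a worked
example makes concrete.  Assuming 64K kernel pages (PAGE_SHIFT 16), 4K
AGP pages, and an illustrative 100M budget:

	long result = 100;			/* budget in megabytes */
	result <<= (20 - PAGE_SHIFT);		/* 100M / 64K -> 1600 kernel pages */
	result *= PAGE_SIZE / AGP_PAGE_SIZE;	/* * 16 -> 25600 AGP pages */
	/* 25600 * 4K == 100M: the same budget, counted in GART entries,
	 * matching the AGP_PAGE_SIZE units of current_memory_agp. */

Likewise the key list is now sized from what it actually is, a bitmap of
MAXKEY bits (MAXKEY/8 bytes), rather than borrowing PAGE_SIZE * 4, which
would balloon needlessly once PAGE_SIZE grows.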
+ * + */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include "agp.h" + +MODULE_AUTHOR("Jeff Hartmann "); +MODULE_PARM(agp_try_unsupported, "1i"); +EXPORT_SYMBOL(agp_free_memory); +EXPORT_SYMBOL(agp_allocate_memory); +EXPORT_SYMBOL(agp_copy_info); +EXPORT_SYMBOL(agp_bind_memory); +EXPORT_SYMBOL(agp_unbind_memory); +EXPORT_SYMBOL(agp_enable); +EXPORT_SYMBOL(agp_backend_acquire); +EXPORT_SYMBOL(agp_backend_release); + +static void flush_cache(void); + +static struct agp_bridge_data agp_bridge; +static int agp_try_unsupported __initdata = 0; + + +static inline void flush_cache(void) +{ +#if defined(__i386__) + asm volatile ("wbinvd":::"memory"); +#elif defined(__alpha__) || defined(__ia64__) || defined(__sparc__) + /* ??? I wonder if we'll really need to flush caches, or if the + core logic can manage to keep the system coherent. The ARM + speaks only of using `cflush' to get things in memory in + preparation for power failure. + + If we do need to call `cflush', we'll need a target page, + as we can only flush one page at a time. + + Ditto for IA-64. --davidm 00/08/07 */ + mb(); +#else +#error "Please define flush_cache." +#endif +} + +#ifdef CONFIG_SMP +static atomic_t cpus_waiting; + +static void ipi_handler(void *null) +{ + flush_cache(); + atomic_dec(&cpus_waiting); + while (atomic_read(&cpus_waiting) > 0) + barrier(); +} + +static void smp_flush_cache(void) +{ + atomic_set(&cpus_waiting, smp_num_cpus - 1); + if (smp_call_function(ipi_handler, NULL, 1, 0) != 0) + panic(PFX "timed out waiting for the other CPUs!\n"); + flush_cache(); + while (atomic_read(&cpus_waiting) > 0) + barrier(); +} +#define global_cache_flush smp_flush_cache +#else /* CONFIG_SMP */ +#define global_cache_flush flush_cache +#endif /* CONFIG_SMP */ + +int agp_backend_acquire(void) +{ + if (agp_bridge.type == NOT_SUPPORTED) { + return -EINVAL; + } + atomic_inc(&agp_bridge.agp_in_use); + + if (atomic_read(&agp_bridge.agp_in_use) != 1) { + atomic_dec(&agp_bridge.agp_in_use); + return -EBUSY; + } + MOD_INC_USE_COUNT; + return 0; +} + +void agp_backend_release(void) +{ + if (agp_bridge.type == NOT_SUPPORTED) { + return; + } + atomic_dec(&agp_bridge.agp_in_use); + MOD_DEC_USE_COUNT; +} + +/* + * Generic routines for handling agp_memory structures - + * They use the basic page allocation routines to do the + * brunt of the work. 
+ */ + + +static void agp_free_key(int key) +{ + + if (key < 0) { + return; + } + if (key < MAXKEY) { + clear_bit(key, agp_bridge.key_list); + } +} + +static int agp_get_key(void) +{ + int bit; + + bit = find_first_zero_bit(agp_bridge.key_list, MAXKEY); + if (bit < MAXKEY) { + set_bit(bit, agp_bridge.key_list); + return bit; + } + return -1; +} + +static agp_memory *agp_create_memory(size_t page_count) +{ + agp_memory *new; + + new = kmalloc(sizeof(agp_memory), GFP_KERNEL); + + if (new == NULL) { + return NULL; + } + memset(new, 0, sizeof(agp_memory)); + new->key = agp_get_key(); + + if (new->key < 0) { + kfree(new); + return NULL; + } + + if (page_count) { + new->memory = vmalloc(page_count * sizeof(unsigned long)); + + if (new->memory == NULL) { + agp_free_key(new->key); + kfree(new); + return NULL; + } + } + return new; +} + +void agp_free_memory(agp_memory * curr) +{ + int i; + + if ((agp_bridge.type == NOT_SUPPORTED) || (curr == NULL)) { + return; + } + if (curr->is_bound == TRUE) { + agp_unbind_memory(curr); + } + if (curr->type != 0) { + agp_bridge.free_by_type(curr); + return; + } + for (i = 0; i < curr->page_count; i += PAGE_SIZE/AGP_PAGE_SIZE) { + curr->memory[i] &= ~(AGP_PAGE_SIZE-1); + agp_bridge.agp_destroy_page((unsigned long) + phys_to_virt(curr->memory[i])); + } + agp_free_key(curr->key); + vfree(curr->memory); + kfree(curr); + MOD_DEC_USE_COUNT; +} + +agp_memory *agp_allocate_memory(size_t page_count, u32 type) +{ + agp_memory *new; + int i; + + if (agp_bridge.type == NOT_SUPPORTED) { + return NULL; + } + if ((atomic_read(&agp_bridge.current_memory_agp) + page_count) > + agp_bridge.max_memory_agp) { + return NULL; + } + + if (type != 0) { + new = agp_bridge.alloc_by_type(page_count, type); + return new; + } + /* We always increase the module count, since free auto-decrements + * it + */ + + MOD_INC_USE_COUNT; + + new = agp_create_memory(page_count); + + if (new == NULL) { + MOD_DEC_USE_COUNT; + return NULL; + } + for (i = 0; i < page_count; i++) { + if ((i % (PAGE_SIZE/AGP_PAGE_SIZE)) == 0) + new->memory[i] = agp_bridge.agp_alloc_page(); + else + new->memory[i] = (new->memory[i-1] & + ~(AGP_PAGE_SIZE-1)) + AGP_PAGE_SIZE; + + if (new->memory[i] == 0) { + /* Free this structure */ + agp_free_memory(new); + return NULL; + } + new->memory[i] = + agp_bridge.mask_memory( + virt_to_phys((void *) new->memory[i]), + type); + new->page_count++; + } + + return new; +} + +/* End - Generic routines for handling agp_memory structures */ + +static int agp_return_size(void) +{ + int current_size; + void *temp; + + temp = agp_bridge.current_size; + + switch (agp_bridge.size_type) { + case U8_APER_SIZE: + current_size = A_SIZE_8(temp)->size; + break; + case U16_APER_SIZE: + current_size = A_SIZE_16(temp)->size; + break; + case U32_APER_SIZE: + current_size = A_SIZE_32(temp)->size; + break; + case LVL2_APER_SIZE: + current_size = A_SIZE_LVL2(temp)->size; + break; + case FIXED_APER_SIZE: + current_size = A_SIZE_FIX(temp)->size; + break; + default: + current_size = 0; + break; + } + + return current_size; +} + +/* Routine to copy over information structure */ + +void agp_copy_info(agp_kern_info * info) +{ + memset(info, 0, sizeof(agp_kern_info)); + if (agp_bridge.type == NOT_SUPPORTED) { + info->chipset = agp_bridge.type; + return; + } + info->version.major = agp_bridge.version->major; + info->version.minor = agp_bridge.version->minor; + info->device = agp_bridge.dev; + info->chipset = agp_bridge.type; + info->mode = agp_bridge.mode; + info->aper_base = agp_bridge.gart_bus_addr; + 
info->aper_size = agp_return_size(); + info->max_memory = agp_bridge.max_memory_agp; + info->current_memory = atomic_read(&agp_bridge.current_memory_agp); +} + +/* End - Routine to copy over information structure */ + +/* + * Routines for handling swapping of agp_memory into the GATT - + * These routines take agp_memory and insert them into the GATT. + * They call device specific routines to actually write to the GATT. + */ + +int agp_bind_memory(agp_memory * curr, off_t pg_start) +{ + int ret_val; + + if ((agp_bridge.type == NOT_SUPPORTED) || + (curr == NULL) || (curr->is_bound == TRUE)) { + return -EINVAL; + } + if (curr->is_flushed == FALSE) { + CACHE_FLUSH(); + curr->is_flushed = TRUE; + } + ret_val = agp_bridge.insert_memory(curr, pg_start, curr->type); + + if (ret_val != 0) { + return ret_val; + } + curr->is_bound = TRUE; + curr->pg_start = pg_start; + return 0; +} + +int agp_unbind_memory(agp_memory * curr) +{ + int ret_val; + + if ((agp_bridge.type == NOT_SUPPORTED) || (curr == NULL)) { + return -EINVAL; + } + if (curr->is_bound != TRUE) { + return -EINVAL; + } + ret_val = agp_bridge.remove_memory(curr, curr->pg_start, curr->type); + + if (ret_val != 0) { + return ret_val; + } + curr->is_bound = FALSE; + curr->pg_start = 0; + return 0; +} + +/* End - Routines for handling swapping of agp_memory into the GATT */ + +/* + * Driver routines - start + * Currently this module supports the following chipsets: + * i810, 440lx, 440bx, 440gx, i840, i850, via vp3, via mvp3, via kx133, + * via kt133, amd irongate, ALi M1541, and generic support for the SiS + * chipsets. + */ + +/* Generic Agp routines - Start */ + +static void agp_generic_agp_enable(u32 mode) +{ + struct pci_dev *device = NULL; + u32 command, scratch, cap_id; + u8 cap_ptr; + + pci_read_config_dword(agp_bridge.dev, + agp_bridge.capndx + 4, + &command); + + /* + * PASS1: go throu all devices that claim to be + * AGP devices and collect their data. + */ + + while ((device = pci_find_class(PCI_CLASS_DISPLAY_VGA << 8, + device)) != NULL) { + pci_read_config_dword(device, 0x04, &scratch); + + if (!(scratch & 0x00100000)) + continue; + + pci_read_config_byte(device, 0x34, &cap_ptr); + + if (cap_ptr != 0x00) { + do { + pci_read_config_dword(device, + cap_ptr, &cap_id); + + if ((cap_id & 0xff) != 0x02) + cap_ptr = (cap_id >> 8) & 0xff; + } + while (((cap_id & 0xff) != 0x02) && (cap_ptr != 0x00)); + } + if (cap_ptr != 0x00) { + /* + * Ok, here we have a AGP device. Disable impossible + * settings, and adjust the readqueue to the minimum. + */ + + pci_read_config_dword(device, cap_ptr + 4, &scratch); + + /* adjust RQ depth */ + command = + ((command & ~0xff000000) | + min(u32, (mode & 0xff000000), + min(u32, (command & 0xff000000), + (scratch & 0xff000000)))); + + /* disable SBA if it's not supported */ + if (!((command & 0x00000200) && + (scratch & 0x00000200) && + (mode & 0x00000200))) + command &= ~0x00000200; + + /* disable FW if it's not supported */ + if (!((command & 0x00000010) && + (scratch & 0x00000010) && + (mode & 0x00000010))) + command &= ~0x00000010; + + if (!((command & 4) && + (scratch & 4) && + (mode & 4))) + command &= ~0x00000004; + + if (!((command & 2) && + (scratch & 2) && + (mode & 2))) + command &= ~0x00000002; + + if (!((command & 1) && + (scratch & 1) && + (mode & 1))) + command &= ~0x00000001; + } + } + /* + * PASS2: Figure out the 4X/2X/1X setting and enable the + * target (our motherboard chipset). 
+ */ + + if (command & 4) { + command &= ~3; /* 4X */ + } + if (command & 2) { + command &= ~5; /* 2X */ + } + if (command & 1) { + command &= ~6; /* 1X */ + } + command |= 0x00000100; + + pci_write_config_dword(agp_bridge.dev, + agp_bridge.capndx + 8, + command); + + /* + * PASS3: Go throu all AGP devices and update the + * command registers. + */ + + while ((device = pci_find_class(PCI_CLASS_DISPLAY_VGA << 8, + device)) != NULL) { + pci_read_config_dword(device, 0x04, &scratch); + + if (!(scratch & 0x00100000)) + continue; + + pci_read_config_byte(device, 0x34, &cap_ptr); + + if (cap_ptr != 0x00) { + do { + pci_read_config_dword(device, + cap_ptr, &cap_id); + + if ((cap_id & 0xff) != 0x02) + cap_ptr = (cap_id >> 8) & 0xff; + } + while (((cap_id & 0xff) != 0x02) && (cap_ptr != 0x00)); + } + if (cap_ptr != 0x00) + pci_write_config_dword(device, cap_ptr + 8, command); + } +} + +static int agp_generic_create_gatt_table(void) +{ + char *table; + char *table_end; + int size; + int page_order; + int num_entries; + int i; + void *temp; + struct page *page; + + /* The generic routines can't handle 2 level gatt's */ + if (agp_bridge.size_type == LVL2_APER_SIZE) { + return -EINVAL; + } + + table = NULL; + i = agp_bridge.aperture_size_idx; + temp = agp_bridge.current_size; + size = page_order = num_entries = 0; + + if (agp_bridge.size_type != FIXED_APER_SIZE) { + do { + switch (agp_bridge.size_type) { + case U8_APER_SIZE: + size = A_SIZE_8(temp)->size; + page_order = + A_SIZE_8(temp)->page_order; + num_entries = + A_SIZE_8(temp)->num_entries; + break; + case U16_APER_SIZE: + size = A_SIZE_16(temp)->size; + page_order = A_SIZE_16(temp)->page_order; + num_entries = A_SIZE_16(temp)->num_entries; + break; + case U32_APER_SIZE: + size = A_SIZE_32(temp)->size; + page_order = A_SIZE_32(temp)->page_order; + num_entries = A_SIZE_32(temp)->num_entries; + break; + /* This case will never really happen. */ + case FIXED_APER_SIZE: + case LVL2_APER_SIZE: + default: + size = page_order = num_entries = 0; + break; + } + + page_order -= (PAGE_SHIFT - AGP_PAGE_SHIFT); + if (page_order < 0) + page_order = 0; + table = (char *) __get_free_pages(GFP_KERNEL, + page_order); + + if (table == NULL) { + i++; + switch (agp_bridge.size_type) { + case U8_APER_SIZE: + agp_bridge.current_size = A_IDX8(); + break; + case U16_APER_SIZE: + agp_bridge.current_size = A_IDX16(); + break; + case U32_APER_SIZE: + agp_bridge.current_size = A_IDX32(); + break; + /* This case will never really + * happen. 
+ */ + case FIXED_APER_SIZE: + case LVL2_APER_SIZE: + default: + agp_bridge.current_size = + agp_bridge.current_size; + break; + } + } else { + agp_bridge.aperture_size_idx = i; + } + } while ((table == NULL) && + (i < agp_bridge.num_aperture_sizes)); + } else { + size = ((aper_size_info_fixed *) temp)->size; + page_order = ((aper_size_info_fixed *) temp)->page_order; + num_entries = ((aper_size_info_fixed *) temp)->num_entries; + page_order -= (PAGE_SHIFT - AGP_PAGE_SHIFT); + if (page_order < 0) + page_order = 0; + table = (char *) __get_free_pages(GFP_KERNEL, page_order); + } + + if (table == NULL) { + return -ENOMEM; + } + table_end = table + ((PAGE_SIZE * (1 << page_order)) - 1); + + for (page = virt_to_page(table); page <= virt_to_page(table_end); page++) + set_bit(PG_reserved, &page->flags); + + agp_bridge.gatt_table_real = (unsigned long *) table; + CACHE_FLUSH(); + agp_bridge.gatt_table = ioremap_nocache(virt_to_phys(table), + (PAGE_SIZE * (1 << page_order))); + CACHE_FLUSH(); + + if (agp_bridge.gatt_table == NULL) { + for (page = virt_to_page(table); page <= virt_to_page(table_end); page++) + clear_bit(PG_reserved, &page->flags); + + free_pages((unsigned long) table, page_order); + + return -ENOMEM; + } + agp_bridge.gatt_bus_addr = virt_to_phys(agp_bridge.gatt_table_real); + + for (i = 0; i < num_entries; i++) { + agp_bridge.gatt_table[i] = + (unsigned long) agp_bridge.scratch_page; + } + + return 0; +} + +static int agp_generic_free_gatt_table(void) +{ + int page_order; + char *table, *table_end; + void *temp; + struct page *page; + + temp = agp_bridge.current_size; + + switch (agp_bridge.size_type) { + case U8_APER_SIZE: + page_order = A_SIZE_8(temp)->page_order; + break; + case U16_APER_SIZE: + page_order = A_SIZE_16(temp)->page_order; + break; + case U32_APER_SIZE: + page_order = A_SIZE_32(temp)->page_order; + break; + case FIXED_APER_SIZE: + page_order = A_SIZE_FIX(temp)->page_order; + break; + case LVL2_APER_SIZE: + /* The generic routines can't deal with 2 level gatt's */ + return -EINVAL; + break; + default: + page_order = 0; + break; + } + + /* Do not worry about freeing memory, because if this is + * called, then all agp memory is deallocated and removed + * from the table. 
+ */ + + iounmap(agp_bridge.gatt_table); + table = (char *) agp_bridge.gatt_table_real; + page_order -= (PAGE_SHIFT - AGP_PAGE_SHIFT); + if (page_order < 0) + page_order = 0; + table_end = table + ((PAGE_SIZE * (1 << page_order)) - 1); + + for (page = virt_to_page(table); page <= virt_to_page(table_end); page++) + clear_bit(PG_reserved, &page->flags); + + free_pages((unsigned long) agp_bridge.gatt_table_real, page_order); + return 0; +} + +static int agp_generic_insert_memory(agp_memory * mem, + off_t pg_start, int type) +{ + int i, j, num_entries; + void *temp; + + temp = agp_bridge.current_size; + + switch (agp_bridge.size_type) { + case U8_APER_SIZE: + num_entries = A_SIZE_8(temp)->num_entries; + break; + case U16_APER_SIZE: + num_entries = A_SIZE_16(temp)->num_entries; + break; + case U32_APER_SIZE: + num_entries = A_SIZE_32(temp)->num_entries; + break; + case FIXED_APER_SIZE: + num_entries = A_SIZE_FIX(temp)->num_entries; + break; + case LVL2_APER_SIZE: + /* The generic routines can't deal with 2 level gatt's */ + return -EINVAL; + break; + default: + num_entries = 0; + break; + } + + if (type != 0 || mem->type != 0) { + /* The generic routines know nothing of memory types */ + return -EINVAL; + } + if ((pg_start + mem->page_count) > num_entries) { + return -EINVAL; + } + j = pg_start; + + while (j < (pg_start + mem->page_count)) { + if (!PGE_EMPTY(agp_bridge.gatt_table[j])) { + return -EBUSY; + } + j++; + } + + if (mem->is_flushed == FALSE) { + CACHE_FLUSH(); + mem->is_flushed = TRUE; + } + for (i = 0, j = pg_start; i < mem->page_count; i++, j++) { + agp_bridge.gatt_table[j] = mem->memory[i]; + } + + agp_bridge.tlb_flush(mem); + return 0; +} + +static int agp_generic_remove_memory(agp_memory * mem, off_t pg_start, + int type) +{ + int i; + + if (type != 0 || mem->type != 0) { + /* The generic routines know nothing of memory types */ + return -EINVAL; + } + for (i = pg_start; i < (mem->page_count + pg_start); i++) { + agp_bridge.gatt_table[i] = + (unsigned long) agp_bridge.scratch_page; + } + + agp_bridge.tlb_flush(mem); + return 0; +} + +static agp_memory *agp_generic_alloc_by_type(size_t page_count, int type) +{ + return NULL; +} + +static void agp_generic_free_by_type(agp_memory * curr) +{ + if (curr->memory != NULL) { + vfree(curr->memory); + } + agp_free_key(curr->key); + kfree(curr); +} + +/* + * Basic Page Allocation Routines - + * These routines handle page allocation + * and by default they reserve the allocated + * memory. They also handle incrementing the + * current_memory_agp value, Which is checked + * against a maximum value. 
+ */ + +static unsigned long agp_generic_alloc_page(void) +{ + void *pt; + + pt = (void *) __get_free_page(GFP_KERNEL); + if (pt == NULL) { + return 0; + } + atomic_inc(&virt_to_page(pt)->count); + set_bit(PG_locked, &virt_to_page(pt)->flags); + atomic_add(PAGE_SIZE/AGP_PAGE_SIZE, &agp_bridge.current_memory_agp); + return (unsigned long) pt; +} + +static void agp_generic_destroy_page(unsigned long page) +{ + void *pt = (void *) page; + + if (pt == NULL) { + return; + } + atomic_dec(&virt_to_page(pt)->count); + clear_bit(PG_locked, &virt_to_page(pt)->flags); + wake_up(&virt_to_page(pt)->wait); + free_page((unsigned long) pt); + atomic_sub(PAGE_SIZE/AGP_PAGE_SIZE, &agp_bridge.current_memory_agp); +} + +/* End Basic Page Allocation Routines */ + +void agp_enable(u32 mode) +{ + if (agp_bridge.type == NOT_SUPPORTED) return; + agp_bridge.agp_enable(mode); +} + +/* End - Generic Agp routines */ + +#ifdef CONFIG_AGP_I810 +static aper_size_info_fixed intel_i810_sizes[] = +{ + {64, 16384, 4}, + /* The 32M mode still requires a 64k gatt */ + {32, 8192, 4} +}; + +#define AGP_DCACHE_MEMORY 1 +#define AGP_PHYS_MEMORY 2 + +static gatt_mask intel_i810_masks[] = +{ + {I810_PTE_VALID, 0}, + {(I810_PTE_VALID | I810_PTE_LOCAL), AGP_DCACHE_MEMORY}, + {I810_PTE_VALID, 0} +}; + +static struct _intel_i810_private { + struct pci_dev *i810_dev; /* device one */ + volatile u8 *registers; + int num_dcache_entries; +} intel_i810_private; + +static int intel_i810_fetch_size(void) +{ + u32 smram_miscc; + aper_size_info_fixed *values; + + pci_read_config_dword(agp_bridge.dev, I810_SMRAM_MISCC, &smram_miscc); + values = A_SIZE_FIX(agp_bridge.aperture_sizes); + + if ((smram_miscc & I810_GMS) == I810_GMS_DISABLE) { + printk(KERN_WARNING PFX "i810 is disabled\n"); + return 0; + } + if ((smram_miscc & I810_GFX_MEM_WIN_SIZE) == I810_GFX_MEM_WIN_32M) { + agp_bridge.previous_size = + agp_bridge.current_size = (void *) (values + 1); + agp_bridge.aperture_size_idx = 1; + return values[1].size; + } else { + agp_bridge.previous_size = + agp_bridge.current_size = (void *) (values); + agp_bridge.aperture_size_idx = 0; + return values[0].size; + } + + return 0; +} + +static int intel_i810_configure(void) +{ + aper_size_info_fixed *current_size; + u32 temp; + int i; + + current_size = A_SIZE_FIX(agp_bridge.current_size); + + pci_read_config_dword(intel_i810_private.i810_dev, I810_MMADDR, &temp); + temp &= 0xfff80000; + + intel_i810_private.registers = + (volatile u8 *) ioremap(temp, 128 * AGP_PAGE_SIZE); + + if ((INREG32(intel_i810_private.registers, I810_DRAM_CTL) + & I810_DRAM_ROW_0) == I810_DRAM_ROW_0_SDRAM) { + /* This will need to be dynamically assigned */ + printk(KERN_INFO PFX "detected 4MB dedicated video ram.\n"); + intel_i810_private.num_dcache_entries = 1024; + } + pci_read_config_dword(intel_i810_private.i810_dev, I810_GMADDR, &temp); + agp_bridge.gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK); + OUTREG32(intel_i810_private.registers, I810_PGETBL_CTL, + agp_bridge.gatt_bus_addr | I810_PGETBL_ENABLED); + CACHE_FLUSH(); + + if (agp_bridge.needs_scratch_page == TRUE) { + for (i = 0; i < current_size->num_entries; i++) { + OUTREG32(intel_i810_private.registers, + I810_PTE_BASE + (i * 4), + agp_bridge.scratch_page); + } + } + return 0; +} + +static void intel_i810_cleanup(void) +{ + OUTREG32(intel_i810_private.registers, I810_PGETBL_CTL, 0); + iounmap((void *) intel_i810_private.registers); +} + +static void intel_i810_tlbflush(agp_memory * mem) +{ + return; +} + +static void intel_i810_agp_enable(u32 mode) +{ + return; +} + 
+static int intel_i810_insert_entries(agp_memory * mem, off_t pg_start, + int type) +{ + int i, j, num_entries; + void *temp; + + temp = agp_bridge.current_size; + num_entries = A_SIZE_FIX(temp)->num_entries; + + if ((pg_start + mem->page_count) > num_entries) { + return -EINVAL; + } + for (j = pg_start; j < (pg_start + mem->page_count); j++) { + if (!PGE_EMPTY(agp_bridge.gatt_table[j])) { + return -EBUSY; + } + } + + if (type != 0 || mem->type != 0) { + if ((type == AGP_DCACHE_MEMORY) && + (mem->type == AGP_DCACHE_MEMORY)) { + /* special insert */ + CACHE_FLUSH(); + for (i = pg_start; + i < (pg_start + mem->page_count); i++) { + OUTREG32(intel_i810_private.registers, + I810_PTE_BASE + (i * 4), + (i * AGP_PAGE_SIZE) | I810_PTE_LOCAL | + I810_PTE_VALID); + } + CACHE_FLUSH(); + agp_bridge.tlb_flush(mem); + return 0; + } + if((type == AGP_PHYS_MEMORY) && + (mem->type == AGP_PHYS_MEMORY)) { + goto insert; + } + return -EINVAL; + } + +insert: + CACHE_FLUSH(); + for (i = 0, j = pg_start; i < mem->page_count; i++, j++) { + OUTREG32(intel_i810_private.registers, + I810_PTE_BASE + (j * 4), mem->memory[i]); + } + CACHE_FLUSH(); + + agp_bridge.tlb_flush(mem); + return 0; +} + +static int intel_i810_remove_entries(agp_memory * mem, off_t pg_start, + int type) +{ + int i; + + for (i = pg_start; i < (mem->page_count + pg_start); i++) { + OUTREG32(intel_i810_private.registers, + I810_PTE_BASE + (i * 4), + agp_bridge.scratch_page); + } + + CACHE_FLUSH(); + agp_bridge.tlb_flush(mem); + return 0; +} + +static agp_memory *intel_i810_alloc_by_type(size_t pg_count, int type) +{ + agp_memory *new; + + if (type == AGP_DCACHE_MEMORY) { + if (pg_count != intel_i810_private.num_dcache_entries) { + return NULL; + } + new = agp_create_memory(0); + + if (new == NULL) { + return NULL; + } + new->type = AGP_DCACHE_MEMORY; + new->page_count = pg_count; + MOD_INC_USE_COUNT; + return new; + } + if(type == AGP_PHYS_MEMORY) { + /* The I810 requires a physical address to program + * it's mouse pointer into hardware. 
However the + * Xserver still writes to it through the agp + * aperture + */ + if (pg_count != 1) { + return NULL; + } + new = agp_create_memory(1); + + if (new == NULL) { + return NULL; + } + MOD_INC_USE_COUNT; + new->memory[0] = agp_bridge.agp_alloc_page(); + + if (new->memory[0] == 0) { + /* Free this structure */ + agp_free_memory(new); + return NULL; + } + new->memory[0] = + agp_bridge.mask_memory( + virt_to_phys((void *) new->memory[0]), + type); + new->page_count = 1; + new->type = AGP_PHYS_MEMORY; + new->physical = virt_to_phys((void *) new->memory[0]); + return new; + } + + return NULL; +} + +static void intel_i810_free_by_type(agp_memory * curr) +{ + agp_free_key(curr->key); + if(curr->type == AGP_PHYS_MEMORY) { + agp_bridge.agp_destroy_page((unsigned long) + phys_to_virt(curr->memory[0])); + vfree(curr->memory); + } + kfree(curr); + MOD_DEC_USE_COUNT; +} + +static unsigned long intel_i810_mask_memory(unsigned long addr, int type) +{ + /* Type checking must be done elsewhere */ + return addr | agp_bridge.masks[type].mask; +} + +static int __init intel_i810_setup(struct pci_dev *i810_dev) +{ + intel_i810_private.i810_dev = i810_dev; + + agp_bridge.masks = intel_i810_masks; + agp_bridge.num_of_masks = 2; + agp_bridge.aperture_sizes = (void *) intel_i810_sizes; + agp_bridge.size_type = FIXED_APER_SIZE; + agp_bridge.num_aperture_sizes = 2; + agp_bridge.dev_private_data = (void *) &intel_i810_private; + agp_bridge.needs_scratch_page = TRUE; + agp_bridge.configure = intel_i810_configure; + agp_bridge.fetch_size = intel_i810_fetch_size; + agp_bridge.cleanup = intel_i810_cleanup; + agp_bridge.tlb_flush = intel_i810_tlbflush; + agp_bridge.mask_memory = intel_i810_mask_memory; + agp_bridge.agp_enable = intel_i810_agp_enable; + agp_bridge.cache_flush = global_cache_flush; + agp_bridge.create_gatt_table = agp_generic_create_gatt_table; + agp_bridge.free_gatt_table = agp_generic_free_gatt_table; + agp_bridge.insert_memory = intel_i810_insert_entries; + agp_bridge.remove_memory = intel_i810_remove_entries; + agp_bridge.alloc_by_type = intel_i810_alloc_by_type; + agp_bridge.free_by_type = intel_i810_free_by_type; + agp_bridge.agp_alloc_page = agp_generic_alloc_page; + agp_bridge.agp_destroy_page = agp_generic_destroy_page; + + return 0; +} + +#endif /* CONFIG_AGP_I810 */ + +#ifdef CONFIG_AGP_INTEL + +static int intel_fetch_size(void) +{ + int i; + u16 temp; + aper_size_info_16 *values; + + pci_read_config_word(agp_bridge.dev, INTEL_APSIZE, &temp); + values = A_SIZE_16(agp_bridge.aperture_sizes); + + for (i = 0; i < agp_bridge.num_aperture_sizes; i++) { + if (temp == values[i].size_value) { + agp_bridge.previous_size = + agp_bridge.current_size = (void *) (values + i); + agp_bridge.aperture_size_idx = i; + return values[i].size; + } + } + + return 0; +} + +static void intel_tlbflush(agp_memory * mem) +{ + pci_write_config_dword(agp_bridge.dev, INTEL_AGPCTRL, 0x2200); + pci_write_config_dword(agp_bridge.dev, INTEL_AGPCTRL, 0x2280); +} + +static void intel_cleanup(void) +{ + u16 temp; + aper_size_info_16 *previous_size; + + previous_size = A_SIZE_16(agp_bridge.previous_size); + pci_read_config_word(agp_bridge.dev, INTEL_NBXCFG, &temp); + pci_write_config_word(agp_bridge.dev, INTEL_NBXCFG, temp & ~(1 << 9)); + pci_write_config_word(agp_bridge.dev, INTEL_APSIZE, + previous_size->size_value); +} + +static int intel_configure(void) +{ + u32 temp; + u16 temp2; + aper_size_info_16 *current_size; + + current_size = A_SIZE_16(agp_bridge.current_size); + + /* aperture size */ + 
pci_write_config_word(agp_bridge.dev, INTEL_APSIZE, + current_size->size_value); + + /* address to map to */ + pci_read_config_dword(agp_bridge.dev, INTEL_APBASE, &temp); + agp_bridge.gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK); + + /* attbase - aperture base */ + pci_write_config_dword(agp_bridge.dev, INTEL_ATTBASE, + agp_bridge.gatt_bus_addr); + + /* agpctrl */ + pci_write_config_dword(agp_bridge.dev, INTEL_AGPCTRL, 0x2280); + + /* paccfg/nbxcfg */ + pci_read_config_word(agp_bridge.dev, INTEL_NBXCFG, &temp2); + pci_write_config_word(agp_bridge.dev, INTEL_NBXCFG, + (temp2 & ~(1 << 10)) | (1 << 9)); + /* clear any possible error conditions */ + pci_write_config_byte(agp_bridge.dev, INTEL_ERRSTS + 1, 7); + return 0; +} + +static int intel_840_configure(void) +{ + u32 temp; + u16 temp2; + aper_size_info_16 *current_size; + + current_size = A_SIZE_16(agp_bridge.current_size); + + /* aperture size */ + pci_write_config_byte(agp_bridge.dev, INTEL_APSIZE, + (char)current_size->size_value); + + /* address to map to */ + pci_read_config_dword(agp_bridge.dev, INTEL_APBASE, &temp); + agp_bridge.gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK); + + /* attbase - aperture base */ + pci_write_config_dword(agp_bridge.dev, INTEL_ATTBASE, + agp_bridge.gatt_bus_addr); + + /* agpctrl */ + pci_write_config_dword(agp_bridge.dev, INTEL_AGPCTRL, 0x0000); + + /* mcgcfg */ + pci_read_config_word(agp_bridge.dev, INTEL_I840_MCHCFG, &temp2); + pci_write_config_word(agp_bridge.dev, INTEL_I840_MCHCFG, + temp2 | (1 << 9)); + /* clear any possible error conditions */ + pci_write_config_word(agp_bridge.dev, INTEL_I840_ERRSTS, 0xc000); + return 0; +} + +static int intel_850_configure(void) +{ + u32 temp; + u16 temp2; + aper_size_info_16 *current_size; + + current_size = A_SIZE_16(agp_bridge.current_size); + + /* aperture size */ + pci_write_config_byte(agp_bridge.dev, INTEL_APSIZE, + (char)current_size->size_value); + + /* address to map to */ + pci_read_config_dword(agp_bridge.dev, INTEL_APBASE, &temp); + agp_bridge.gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK); + + /* attbase - aperture base */ + pci_write_config_dword(agp_bridge.dev, INTEL_ATTBASE, + agp_bridge.gatt_bus_addr); + + /* agpctrl */ + pci_write_config_dword(agp_bridge.dev, INTEL_AGPCTRL, 0x0000); + + /* mcgcfg */ + pci_read_config_word(agp_bridge.dev, INTEL_I850_MCHCFG, &temp2); + pci_write_config_word(agp_bridge.dev, INTEL_I850_MCHCFG, + temp2 | (1 << 9)); + /* clear any possible AGP-related error conditions */ + pci_write_config_word(agp_bridge.dev, INTEL_I850_ERRSTS, 0x001c); + return 0; +} + +static unsigned long intel_mask_memory(unsigned long addr, int type) +{ + /* Memory type is ignored */ + + return addr | agp_bridge.masks[0].mask; +} + + +/* Setup function */ +static gatt_mask intel_generic_masks[] = +{ + {0x00000017, 0} +}; + +static aper_size_info_16 intel_generic_sizes[7] = +{ + {256, 65536, 6, 0}, + {128, 32768, 5, 32}, + {64, 16384, 4, 48}, + {32, 8192, 3, 56}, + {16, 4096, 2, 60}, + {8, 2048, 1, 62}, + {4, 1024, 0, 63} +}; + +static int __init intel_generic_setup (struct pci_dev *pdev) +{ + agp_bridge.masks = intel_generic_masks; + agp_bridge.num_of_masks = 1; + agp_bridge.aperture_sizes = (void *) intel_generic_sizes; + agp_bridge.size_type = U16_APER_SIZE; + agp_bridge.num_aperture_sizes = 7; + agp_bridge.dev_private_data = NULL; + agp_bridge.needs_scratch_page = FALSE; + agp_bridge.configure = intel_configure; + agp_bridge.fetch_size = intel_fetch_size; + agp_bridge.cleanup = intel_cleanup; + agp_bridge.tlb_flush = 
intel_tlbflush; + agp_bridge.mask_memory = intel_mask_memory; + agp_bridge.agp_enable = agp_generic_agp_enable; + agp_bridge.cache_flush = global_cache_flush; + agp_bridge.create_gatt_table = agp_generic_create_gatt_table; + agp_bridge.free_gatt_table = agp_generic_free_gatt_table; + agp_bridge.insert_memory = agp_generic_insert_memory; + agp_bridge.remove_memory = agp_generic_remove_memory; + agp_bridge.alloc_by_type = agp_generic_alloc_by_type; + agp_bridge.free_by_type = agp_generic_free_by_type; + agp_bridge.agp_alloc_page = agp_generic_alloc_page; + agp_bridge.agp_destroy_page = agp_generic_destroy_page; + + return 0; + + (void) pdev; /* unused */ +} + +static int __init intel_840_setup (struct pci_dev *pdev) +{ + agp_bridge.masks = intel_generic_masks; + agp_bridge.num_of_masks = 1; + agp_bridge.aperture_sizes = (void *) intel_generic_sizes; + agp_bridge.size_type = U16_APER_SIZE; + agp_bridge.num_aperture_sizes = 7; + agp_bridge.dev_private_data = NULL; + agp_bridge.needs_scratch_page = FALSE; + agp_bridge.configure = intel_840_configure; + agp_bridge.fetch_size = intel_fetch_size; + agp_bridge.cleanup = intel_cleanup; + agp_bridge.tlb_flush = intel_tlbflush; + agp_bridge.mask_memory = intel_mask_memory; + agp_bridge.agp_enable = agp_generic_agp_enable; + agp_bridge.cache_flush = global_cache_flush; + agp_bridge.create_gatt_table = agp_generic_create_gatt_table; + agp_bridge.free_gatt_table = agp_generic_free_gatt_table; + agp_bridge.insert_memory = agp_generic_insert_memory; + agp_bridge.remove_memory = agp_generic_remove_memory; + agp_bridge.alloc_by_type = agp_generic_alloc_by_type; + agp_bridge.free_by_type = agp_generic_free_by_type; + agp_bridge.agp_alloc_page = agp_generic_alloc_page; + agp_bridge.agp_destroy_page = agp_generic_destroy_page; + + return 0; + + (void) pdev; /* unused */ +} + +static int __init intel_850_setup (struct pci_dev *pdev) +{ + agp_bridge.masks = intel_generic_masks; + agp_bridge.num_of_masks = 1; + agp_bridge.aperture_sizes = (void *) intel_generic_sizes; + agp_bridge.size_type = U16_APER_SIZE; + agp_bridge.num_aperture_sizes = 7; + agp_bridge.dev_private_data = NULL; + agp_bridge.needs_scratch_page = FALSE; + agp_bridge.configure = intel_850_configure; + agp_bridge.fetch_size = intel_fetch_size; + agp_bridge.cleanup = intel_cleanup; + agp_bridge.tlb_flush = intel_tlbflush; + agp_bridge.mask_memory = intel_mask_memory; + agp_bridge.agp_enable = agp_generic_agp_enable; + agp_bridge.cache_flush = global_cache_flush; + agp_bridge.create_gatt_table = agp_generic_create_gatt_table; + agp_bridge.free_gatt_table = agp_generic_free_gatt_table; + agp_bridge.insert_memory = agp_generic_insert_memory; + agp_bridge.remove_memory = agp_generic_remove_memory; + agp_bridge.alloc_by_type = agp_generic_alloc_by_type; + agp_bridge.free_by_type = agp_generic_free_by_type; + agp_bridge.agp_alloc_page = agp_generic_alloc_page; + agp_bridge.agp_destroy_page = agp_generic_destroy_page; + + return 0; + + (void) pdev; /* unused */ +} + +#endif /* CONFIG_AGP_INTEL */ + +#ifdef CONFIG_AGP_VIA + +static int via_fetch_size(void) +{ + int i; + u8 temp; + aper_size_info_8 *values; + + values = A_SIZE_8(agp_bridge.aperture_sizes); + pci_read_config_byte(agp_bridge.dev, VIA_APSIZE, &temp); + for (i = 0; i < agp_bridge.num_aperture_sizes; i++) { + if (temp == values[i].size_value) { + agp_bridge.previous_size = + agp_bridge.current_size = (void *) (values + i); + agp_bridge.aperture_size_idx = i; + return values[i].size; + } + } + + return 0; +} + +static int via_configure(void) +{ + 
u32 temp; + aper_size_info_8 *current_size; + + current_size = A_SIZE_8(agp_bridge.current_size); + /* aperture size */ + pci_write_config_byte(agp_bridge.dev, VIA_APSIZE, + current_size->size_value); + /* address to map too */ + pci_read_config_dword(agp_bridge.dev, VIA_APBASE, &temp); + agp_bridge.gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK); + + /* GART control register */ + pci_write_config_dword(agp_bridge.dev, VIA_GARTCTRL, 0x0000000f); + + /* attbase - aperture GATT base */ + pci_write_config_dword(agp_bridge.dev, VIA_ATTBASE, + (agp_bridge.gatt_bus_addr & 0xfffff000) | 3); + return 0; +} + +static void via_cleanup(void) +{ + aper_size_info_8 *previous_size; + + previous_size = A_SIZE_8(agp_bridge.previous_size); + pci_write_config_byte(agp_bridge.dev, VIA_APSIZE, + previous_size->size_value); + /* Do not disable by writing 0 to VIA_ATTBASE, it screws things up + * during reinitialization. + */ +} + +static void via_tlbflush(agp_memory * mem) +{ + pci_write_config_dword(agp_bridge.dev, VIA_GARTCTRL, 0x0000008f); + pci_write_config_dword(agp_bridge.dev, VIA_GARTCTRL, 0x0000000f); +} + +static unsigned long via_mask_memory(unsigned long addr, int type) +{ + /* Memory type is ignored */ + + return addr | agp_bridge.masks[0].mask; +} + +static aper_size_info_8 via_generic_sizes[7] = +{ + {256, 65536, 6, 0}, + {128, 32768, 5, 128}, + {64, 16384, 4, 192}, + {32, 8192, 3, 224}, + {16, 4096, 2, 240}, + {8, 2048, 1, 248}, + {4, 1024, 0, 252} +}; + +static gatt_mask via_generic_masks[] = +{ + {0x00000000, 0} +}; + +static int __init via_generic_setup (struct pci_dev *pdev) +{ + agp_bridge.masks = via_generic_masks; + agp_bridge.num_of_masks = 1; + agp_bridge.aperture_sizes = (void *) via_generic_sizes; + agp_bridge.size_type = U8_APER_SIZE; + agp_bridge.num_aperture_sizes = 7; + agp_bridge.dev_private_data = NULL; + agp_bridge.needs_scratch_page = FALSE; + agp_bridge.configure = via_configure; + agp_bridge.fetch_size = via_fetch_size; + agp_bridge.cleanup = via_cleanup; + agp_bridge.tlb_flush = via_tlbflush; + agp_bridge.mask_memory = via_mask_memory; + agp_bridge.agp_enable = agp_generic_agp_enable; + agp_bridge.cache_flush = global_cache_flush; + agp_bridge.create_gatt_table = agp_generic_create_gatt_table; + agp_bridge.free_gatt_table = agp_generic_free_gatt_table; + agp_bridge.insert_memory = agp_generic_insert_memory; + agp_bridge.remove_memory = agp_generic_remove_memory; + agp_bridge.alloc_by_type = agp_generic_alloc_by_type; + agp_bridge.free_by_type = agp_generic_free_by_type; + agp_bridge.agp_alloc_page = agp_generic_alloc_page; + agp_bridge.agp_destroy_page = agp_generic_destroy_page; + + return 0; + + (void) pdev; /* unused */ +} + +#endif /* CONFIG_AGP_VIA */ + +#ifdef CONFIG_AGP_SIS + +static int sis_fetch_size(void) +{ + u8 temp_size; + int i; + aper_size_info_8 *values; + + pci_read_config_byte(agp_bridge.dev, SIS_APSIZE, &temp_size); + values = A_SIZE_8(agp_bridge.aperture_sizes); + for (i = 0; i < agp_bridge.num_aperture_sizes; i++) { + if ((temp_size == values[i].size_value) || + ((temp_size & ~(0x03)) == + (values[i].size_value & ~(0x03)))) { + agp_bridge.previous_size = + agp_bridge.current_size = (void *) (values + i); + + agp_bridge.aperture_size_idx = i; + return values[i].size; + } + } + + return 0; +} + + +static void sis_tlbflush(agp_memory * mem) +{ + pci_write_config_byte(agp_bridge.dev, SIS_TLBFLUSH, 0x02); +} + +static int sis_configure(void) +{ + u32 temp; + aper_size_info_8 *current_size; + + current_size = A_SIZE_8(agp_bridge.current_size); + 
pci_write_config_byte(agp_bridge.dev, SIS_TLBCNTRL, 0x05); + pci_read_config_dword(agp_bridge.dev, SIS_APBASE, &temp); + agp_bridge.gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK); + pci_write_config_dword(agp_bridge.dev, SIS_ATTBASE, + agp_bridge.gatt_bus_addr); + pci_write_config_byte(agp_bridge.dev, SIS_APSIZE, + current_size->size_value); + return 0; +} + +static void sis_cleanup(void) +{ + aper_size_info_8 *previous_size; + + previous_size = A_SIZE_8(agp_bridge.previous_size); + pci_write_config_byte(agp_bridge.dev, SIS_APSIZE, + (previous_size->size_value & ~(0x03))); +} + +static unsigned long sis_mask_memory(unsigned long addr, int type) +{ + /* Memory type is ignored */ + + return addr | agp_bridge.masks[0].mask; +} + +static aper_size_info_8 sis_generic_sizes[7] = +{ + {256, 65536, 6, 99}, + {128, 32768, 5, 83}, + {64, 16384, 4, 67}, + {32, 8192, 3, 51}, + {16, 4096, 2, 35}, + {8, 2048, 1, 19}, + {4, 1024, 0, 3} +}; + +static gatt_mask sis_generic_masks[] = +{ + {0x00000000, 0} +}; + +static int __init sis_generic_setup (struct pci_dev *pdev) +{ + agp_bridge.masks = sis_generic_masks; + agp_bridge.num_of_masks = 1; + agp_bridge.aperture_sizes = (void *) sis_generic_sizes; + agp_bridge.size_type = U8_APER_SIZE; + agp_bridge.num_aperture_sizes = 7; + agp_bridge.dev_private_data = NULL; + agp_bridge.needs_scratch_page = FALSE; + agp_bridge.configure = sis_configure; + agp_bridge.fetch_size = sis_fetch_size; + agp_bridge.cleanup = sis_cleanup; + agp_bridge.tlb_flush = sis_tlbflush; + agp_bridge.mask_memory = sis_mask_memory; + agp_bridge.agp_enable = agp_generic_agp_enable; + agp_bridge.cache_flush = global_cache_flush; + agp_bridge.create_gatt_table = agp_generic_create_gatt_table; + agp_bridge.free_gatt_table = agp_generic_free_gatt_table; + agp_bridge.insert_memory = agp_generic_insert_memory; + agp_bridge.remove_memory = agp_generic_remove_memory; + agp_bridge.alloc_by_type = agp_generic_alloc_by_type; + agp_bridge.free_by_type = agp_generic_free_by_type; + agp_bridge.agp_alloc_page = agp_generic_alloc_page; + agp_bridge.agp_destroy_page = agp_generic_destroy_page; + + return 0; +} + +#endif /* CONFIG_AGP_SIS */ + +#ifdef CONFIG_AGP_AMD + +typedef struct _amd_page_map { + unsigned long *real; + unsigned long *remapped; +} amd_page_map; + +static struct _amd_irongate_private { + volatile u8 *registers; + amd_page_map **gatt_pages; + int num_tables; +} amd_irongate_private; + +static int amd_create_page_map(amd_page_map *page_map) +{ + int i; + + page_map->real = (unsigned long *) __get_free_page(GFP_KERNEL); + if (page_map->real == NULL) { + return -ENOMEM; + } + set_bit(PG_reserved, &virt_to_page(page_map->real)->flags); + CACHE_FLUSH(); + page_map->remapped = ioremap_nocache(virt_to_phys(page_map->real), + PAGE_SIZE); + if (page_map->remapped == NULL) { + clear_bit(PG_reserved, + &virt_to_page(page_map->real)->flags); + free_page((unsigned long) page_map->real); + page_map->real = NULL; + return -ENOMEM; + } + CACHE_FLUSH(); + + for(i = 0; i < AGP_PAGE_SIZE / sizeof(unsigned long); i++) { + page_map->remapped[i] = agp_bridge.scratch_page; + } + + return 0; +} + +static void amd_free_page_map(amd_page_map *page_map) +{ + iounmap(page_map->remapped); + clear_bit(PG_reserved, + &virt_to_page(page_map->real)->flags); + free_page((unsigned long) page_map->real); +} + +static void amd_free_gatt_pages(void) +{ + int i; + amd_page_map **tables; + amd_page_map *entry; + + tables = amd_irongate_private.gatt_pages; + for(i = 0; i < amd_irongate_private.num_tables; i++) { + entry = 
tables[i]; + if (entry != NULL) { + if (entry->real != NULL && + (i % (PAGE_SIZE/AGP_PAGE_SIZE)) == 0) { + amd_free_page_map(entry); + } + kfree(entry); + } + } + kfree(tables); +} + +static int amd_create_gatt_pages(int nr_tables) +{ + amd_page_map **tables; + amd_page_map *entry; + int retval = 0; + int i; + + tables = kmalloc((nr_tables + 1) * sizeof(amd_page_map *), + GFP_KERNEL); + if (tables == NULL) { + return -ENOMEM; + } + memset(tables, 0, sizeof(amd_page_map *) * (nr_tables + 1)); + for (i = 0; i < nr_tables; i++) { + entry = kmalloc(sizeof(amd_page_map), GFP_KERNEL); + if (entry == NULL) { + retval = -ENOMEM; + break; + } + memset(entry, 0, sizeof(amd_page_map)); + tables[i] = entry; + if ((i % (PAGE_SIZE/AGP_PAGE_SIZE)) == 0) { + retval = amd_create_page_map(entry); + if (retval != 0) break; + } else { + entry->real = tables[i-1]->real + 1024; + entry->remapped = tables[i-1]->remapped + 1024; + } + } + amd_irongate_private.num_tables = nr_tables; + amd_irongate_private.gatt_pages = tables; + + if (retval != 0) amd_free_gatt_pages(); + + return retval; +} + +/* Since we don't need contigious memory we just try + * to get the gatt table once + */ + +#define GET_PAGE_DIR_OFF(addr) (addr >> 22) +#define GET_PAGE_DIR_IDX(addr) (GET_PAGE_DIR_OFF(addr) - \ + GET_PAGE_DIR_OFF(agp_bridge.gart_bus_addr)) +#define GET_GATT_OFF(addr) ((addr & 0x003ff000) >> 12) +#define GET_GATT(addr) (amd_irongate_private.gatt_pages[\ + GET_PAGE_DIR_IDX(addr)]->remapped) + +static int amd_create_gatt_table(void) +{ + aper_size_info_lvl2 *value; + amd_page_map page_dir; + unsigned long addr; + int retval; + u32 temp; + int i; + + value = A_SIZE_LVL2(agp_bridge.current_size); + retval = amd_create_page_map(&page_dir); + if (retval != 0) { + return retval; + } + + retval = amd_create_gatt_pages(value->num_entries / 1024); + if (retval != 0) { + amd_free_page_map(&page_dir); + return retval; + } + + agp_bridge.gatt_table_real = page_dir.real; + agp_bridge.gatt_table = page_dir.remapped; + agp_bridge.gatt_bus_addr = virt_to_bus(page_dir.real); + + /* Get the address for the gart region. 
+ * This is a bus address even on the alpha, b/c its + * used to program the agp master not the cpu + */ + + pci_read_config_dword(agp_bridge.dev, AMD_APBASE, &temp); + addr = (temp & PCI_BASE_ADDRESS_MEM_MASK); + agp_bridge.gart_bus_addr = addr; + + /* Calculate the agp offset */ + for(i = 0; i < value->num_entries / 1024; i++, addr += 0x00400000) { + page_dir.remapped[GET_PAGE_DIR_OFF(addr)] = + virt_to_bus(amd_irongate_private.gatt_pages[i]->real); + page_dir.remapped[GET_PAGE_DIR_OFF(addr)] |= 0x00000001; + } + + return 0; +} + +static int amd_free_gatt_table(void) +{ + amd_page_map page_dir; + + page_dir.real = agp_bridge.gatt_table_real; + page_dir.remapped = agp_bridge.gatt_table; + + amd_free_gatt_pages(); + amd_free_page_map(&page_dir); + return 0; +} + +static int amd_irongate_fetch_size(void) +{ + int i; + u32 temp; + aper_size_info_lvl2 *values; + + pci_read_config_dword(agp_bridge.dev, AMD_APSIZE, &temp); + temp = (temp & 0x0000000e); + values = A_SIZE_LVL2(agp_bridge.aperture_sizes); + for (i = 0; i < agp_bridge.num_aperture_sizes; i++) { + if (temp == values[i].size_value) { + agp_bridge.previous_size = + agp_bridge.current_size = (void *) (values + i); + + agp_bridge.aperture_size_idx = i; + return values[i].size; + } + } + + return 0; +} + +static int amd_irongate_configure(void) +{ + aper_size_info_lvl2 *current_size; + u32 temp; + u16 enable_reg; + + current_size = A_SIZE_LVL2(agp_bridge.current_size); + + /* Get the memory mapped registers */ + pci_read_config_dword(agp_bridge.dev, AMD_MMBASE, &temp); + temp = (temp & PCI_BASE_ADDRESS_MEM_MASK); + amd_irongate_private.registers = (volatile u8 *) ioremap(temp, AGP_PAGE_SIZE); + + /* Write out the address of the gatt table */ + OUTREG32(amd_irongate_private.registers, AMD_ATTBASE, + agp_bridge.gatt_bus_addr); + + /* Write the Sync register */ + pci_write_config_byte(agp_bridge.dev, AMD_MODECNTL, 0x80); + + /* Set indexing mode */ + pci_write_config_byte(agp_bridge.dev, AMD_MODECNTL2, 0x00); + + /* Write the enable register */ + enable_reg = INREG16(amd_irongate_private.registers, AMD_GARTENABLE); + enable_reg = (enable_reg | 0x0004); + OUTREG16(amd_irongate_private.registers, AMD_GARTENABLE, enable_reg); + + /* Write out the size register */ + pci_read_config_dword(agp_bridge.dev, AMD_APSIZE, &temp); + temp = (((temp & ~(0x0000000e)) | current_size->size_value) + | 0x00000001); + pci_write_config_dword(agp_bridge.dev, AMD_APSIZE, temp); + + /* Flush the tlb */ + OUTREG32(amd_irongate_private.registers, AMD_TLBFLUSH, 0x00000001); + + return 0; +} + +static void amd_irongate_cleanup(void) +{ + aper_size_info_lvl2 *previous_size; + u32 temp; + u16 enable_reg; + + previous_size = A_SIZE_LVL2(agp_bridge.previous_size); + + enable_reg = INREG16(amd_irongate_private.registers, AMD_GARTENABLE); + enable_reg = (enable_reg & ~(0x0004)); + OUTREG16(amd_irongate_private.registers, AMD_GARTENABLE, enable_reg); + + /* Write back the previous size and disable gart translation */ + pci_read_config_dword(agp_bridge.dev, AMD_APSIZE, &temp); + temp = ((temp & ~(0x0000000f)) | previous_size->size_value); + pci_write_config_dword(agp_bridge.dev, AMD_APSIZE, temp); + iounmap((void *) amd_irongate_private.registers); +} + +/* + * This routine could be implemented by taking the addresses + * written to the GATT, and flushing them individually. However + * currently it just flushes the whole table. Which is probably + * more efficent, since agp_memory blocks can be a large number of + * entries. 
+ */ + +static void amd_irongate_tlbflush(agp_memory * temp) +{ + OUTREG32(amd_irongate_private.registers, AMD_TLBFLUSH, 0x00000001); +} + +static unsigned long amd_irongate_mask_memory(unsigned long addr, int type) +{ + /* Only type 0 is supported by the irongate */ + + return addr | agp_bridge.masks[0].mask; +} + +static int amd_insert_memory(agp_memory * mem, + off_t pg_start, int type) +{ + int i, j, num_entries; + unsigned long *cur_gatt; + unsigned long addr; + + num_entries = A_SIZE_LVL2(agp_bridge.current_size)->num_entries; + + if (type != 0 || mem->type != 0) { + return -EINVAL; + } + if ((pg_start + mem->page_count) > num_entries) { + return -EINVAL; + } + + j = pg_start; + while (j < (pg_start + mem->page_count)) { + addr = (j * AGP_PAGE_SIZE) + agp_bridge.gart_bus_addr; + cur_gatt = GET_GATT(addr); + if (!PGE_EMPTY(cur_gatt[GET_GATT_OFF(addr)])) { + return -EBUSY; + } + j++; + } + + if (mem->is_flushed == FALSE) { + CACHE_FLUSH(); + mem->is_flushed = TRUE; + } + + for (i = 0, j = pg_start; i < mem->page_count; i++, j++) { + addr = (j * AGP_PAGE_SIZE) + agp_bridge.gart_bus_addr; + cur_gatt = GET_GATT(addr); + cur_gatt[GET_GATT_OFF(addr)] = mem->memory[i]; + } + agp_bridge.tlb_flush(mem); + return 0; +} + +static int amd_remove_memory(agp_memory * mem, off_t pg_start, + int type) +{ + int i; + unsigned long *cur_gatt; + unsigned long addr; + + if (type != 0 || mem->type != 0) { + return -EINVAL; + } + for (i = pg_start; i < (mem->page_count + pg_start); i++) { + addr = (i * AGP_PAGE_SIZE) + agp_bridge.gart_bus_addr; + cur_gatt = GET_GATT(addr); + cur_gatt[GET_GATT_OFF(addr)] = + (unsigned long) agp_bridge.scratch_page; + } + + agp_bridge.tlb_flush(mem); + return 0; +} + +static aper_size_info_lvl2 amd_irongate_sizes[7] = +{ + {2048, 524288, 0x0000000c}, + {1024, 262144, 0x0000000a}, + {512, 131072, 0x00000008}, + {256, 65536, 0x00000006}, + {128, 32768, 0x00000004}, + {64, 16384, 0x00000002}, + {32, 8192, 0x00000000} +}; + +static gatt_mask amd_irongate_masks[] = +{ + {0x00000001, 0} +}; + +static int __init amd_irongate_setup (struct pci_dev *pdev) +{ + agp_bridge.masks = amd_irongate_masks; + agp_bridge.num_of_masks = 1; + agp_bridge.aperture_sizes = (void *) amd_irongate_sizes; + agp_bridge.size_type = LVL2_APER_SIZE; + agp_bridge.num_aperture_sizes = 7; + agp_bridge.dev_private_data = (void *) &amd_irongate_private; + agp_bridge.needs_scratch_page = FALSE; + agp_bridge.configure = amd_irongate_configure; + agp_bridge.fetch_size = amd_irongate_fetch_size; + agp_bridge.cleanup = amd_irongate_cleanup; + agp_bridge.tlb_flush = amd_irongate_tlbflush; + agp_bridge.mask_memory = amd_irongate_mask_memory; + agp_bridge.agp_enable = agp_generic_agp_enable; + agp_bridge.cache_flush = global_cache_flush; + agp_bridge.create_gatt_table = amd_create_gatt_table; + agp_bridge.free_gatt_table = amd_free_gatt_table; + agp_bridge.insert_memory = amd_insert_memory; + agp_bridge.remove_memory = amd_remove_memory; + agp_bridge.alloc_by_type = agp_generic_alloc_by_type; + agp_bridge.free_by_type = agp_generic_free_by_type; + agp_bridge.agp_alloc_page = agp_generic_alloc_page; + agp_bridge.agp_destroy_page = agp_generic_destroy_page; + + return 0; + + (void) pdev; /* unused */ +} + +#endif /* CONFIG_AGP_AMD */ + +#ifdef CONFIG_AGP_ALI + +static int ali_fetch_size(void) +{ + int i; + u32 temp; + aper_size_info_32 *values; + + pci_read_config_dword(agp_bridge.dev, ALI_ATTBASE, &temp); + temp &= ~(0xfffffff0); + values = A_SIZE_32(agp_bridge.aperture_sizes); + + for (i = 0; i < 
agp_bridge.num_aperture_sizes; i++) { + if (temp == values[i].size_value) { + agp_bridge.previous_size = + agp_bridge.current_size = (void *) (values + i); + agp_bridge.aperture_size_idx = i; + return values[i].size; + } + } + + return 0; +} + +static void ali_tlbflush(agp_memory * mem) +{ + u32 temp; + + pci_read_config_dword(agp_bridge.dev, ALI_TLBCTRL, &temp); +// clear tag + pci_write_config_dword(agp_bridge.dev, ALI_TAGCTRL, + ((temp & 0xfffffff0) | 0x00000001|0x00000002)); +} + +static void ali_cleanup(void) +{ + aper_size_info_32 *previous_size; + u32 temp; + + previous_size = A_SIZE_32(agp_bridge.previous_size); + + pci_read_config_dword(agp_bridge.dev, ALI_TLBCTRL, &temp); +// clear tag + pci_write_config_dword(agp_bridge.dev, ALI_TAGCTRL, + ((temp & 0xffffff00) | 0x00000001|0x00000002)); + + pci_read_config_dword(agp_bridge.dev, ALI_ATTBASE, &temp); + pci_write_config_dword(agp_bridge.dev, ALI_ATTBASE, + ((temp & 0x00000ff0) | previous_size->size_value)); +} + +static int ali_configure(void) +{ + u32 temp; + aper_size_info_32 *current_size; + + current_size = A_SIZE_32(agp_bridge.current_size); + + /* aperture size and gatt addr */ + pci_read_config_dword(agp_bridge.dev, ALI_ATTBASE, &temp); + temp = (((temp & 0x00000ff0) | (agp_bridge.gatt_bus_addr & 0xfffff000)) + | (current_size->size_value & 0xf)); + pci_write_config_dword(agp_bridge.dev, ALI_ATTBASE, temp); + + /* tlb control */ + + /* + * Question: Jeff, ALi's patch deletes this: + * + * pci_read_config_dword(agp_bridge.dev, ALI_TLBCTRL, &temp); + * pci_write_config_dword(agp_bridge.dev, ALI_TLBCTRL, + * ((temp & 0xffffff00) | 0x00000010)); + * + * and replaces it with the following, which seems to duplicate the + * next couple of lines below it. I suspect this was an oversight, + * but you might want to check up on this? 
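+ *
+ * (As it stands the duplication looks harmless: both reads of
+ * ALI_APBASE below store the same value into gart_bus_addr.)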
+ */ + + pci_read_config_dword(agp_bridge.dev, ALI_APBASE, &temp); + agp_bridge.gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK); + + /* address to map to */ + pci_read_config_dword(agp_bridge.dev, ALI_APBASE, &temp); + agp_bridge.gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK); + +#if 0 + if (agp_bridge.type == ALI_M1541) { + u32 nlvm_addr = 0; + + switch (current_size->size_value) { + case 0: break; + case 1: nlvm_addr = 0x100000;break; + case 2: nlvm_addr = 0x200000;break; + case 3: nlvm_addr = 0x400000;break; + case 4: nlvm_addr = 0x800000;break; + case 6: nlvm_addr = 0x1000000;break; + case 7: nlvm_addr = 0x2000000;break; + case 8: nlvm_addr = 0x4000000;break; + case 9: nlvm_addr = 0x8000000;break; + case 10: nlvm_addr = 0x10000000;break; + default: break; + } + nlvm_addr--; + nlvm_addr&=0xfff00000; + + nlvm_addr+= agp_bridge.gart_bus_addr; + nlvm_addr|=(agp_bridge.gart_bus_addr>>12); + printk(KERN_INFO PFX "nlvm top &base = %8x\n",nlvm_addr); + } +#endif + + pci_read_config_dword(agp_bridge.dev, ALI_TLBCTRL, &temp); + temp &= 0xffffff7f; //enable TLB + pci_write_config_dword(agp_bridge.dev, ALI_TLBCTRL, temp); + + return 0; +} + +static unsigned long ali_mask_memory(unsigned long addr, int type) +{ + /* Memory type is ignored */ + + return addr | agp_bridge.masks[0].mask; +} + +static void ali_cache_flush(void) +{ + global_cache_flush(); + + if (agp_bridge.type == ALI_M1541) { + int i, page_count; + u32 temp; + + page_count = 1 << A_SIZE_32(agp_bridge.current_size)->page_order; + for (i = 0; i < AGP_PAGE_SIZE * page_count; i += AGP_PAGE_SIZE) { + pci_read_config_dword(agp_bridge.dev, ALI_CACHE_FLUSH_CTRL, &temp); + pci_write_config_dword(agp_bridge.dev, ALI_CACHE_FLUSH_CTRL, + (((temp & ALI_CACHE_FLUSH_ADDR_MASK) | + (agp_bridge.gatt_bus_addr + i)) | + ALI_CACHE_FLUSH_EN)); + } + } +} + +static unsigned long ali_alloc_page(void) +{ + void *pt; + u32 temp; + int i; + + pt = (void *) __get_free_page(GFP_KERNEL); + if (pt == NULL) + return 0; + + atomic_inc(&virt_to_page(pt)->count); + set_bit(PG_locked, &virt_to_page(pt)->flags); + atomic_add(PAGE_SIZE/AGP_PAGE_SIZE, &agp_bridge.current_memory_agp); + + global_cache_flush(); + + if (agp_bridge.type == ALI_M1541) { + for (i = 0; i < PAGE_SIZE/AGP_PAGE_SIZE; i++) { + pci_read_config_dword(agp_bridge.dev, ALI_CACHE_FLUSH_CTRL, &temp); + pci_write_config_dword(agp_bridge.dev, ALI_CACHE_FLUSH_CTRL, + (((temp & ALI_CACHE_FLUSH_ADDR_MASK) | + virt_to_phys(pt + i*AGP_PAGE_SIZE)) | + ALI_CACHE_FLUSH_EN )); + } + } + return (unsigned long) pt; +} + +static void ali_destroy_page(unsigned long page) +{ + void *pt = (void *) page; + u32 temp; + int i; + + if (pt == NULL) + return; + + global_cache_flush(); + + if (agp_bridge.type == ALI_M1541) { + for (i = 0; i < PAGE_SIZE/AGP_PAGE_SIZE; i++) { + pci_read_config_dword(agp_bridge.dev, ALI_CACHE_FLUSH_CTRL, &temp); + pci_write_config_dword(agp_bridge.dev, ALI_CACHE_FLUSH_CTRL, + (((temp & ALI_CACHE_FLUSH_ADDR_MASK) | + virt_to_phys(pt + i*AGP_PAGE_SIZE)) | + ALI_CACHE_FLUSH_EN)); + } + } + + atomic_dec(&virt_to_page(pt)->count); + clear_bit(PG_locked, &virt_to_page(pt)->flags); + wake_up(&virt_to_page(pt)->wait); + free_page((unsigned long) pt); + atomic_sub(PAGE_SIZE/AGP_PAGE_SIZE, &agp_bridge.current_memory_agp); +} + +/* Setup function */ +static gatt_mask ali_generic_masks[] = +{ + {0x00000000, 0} +}; + +static aper_size_info_32 ali_generic_sizes[7] = +{ + {256, 65536, 6, 10}, + {128, 32768, 5, 9}, + {64, 16384, 4, 8}, + {32, 8192, 3, 7}, + {16, 4096, 2, 6}, + {8, 2048, 1, 4}, + {4, 1024, 0, 
3} +}; + +static int __init ali_generic_setup (struct pci_dev *pdev) +{ + agp_bridge.masks = ali_generic_masks; + agp_bridge.num_of_masks = 1; + agp_bridge.aperture_sizes = (void *) ali_generic_sizes; + agp_bridge.size_type = U32_APER_SIZE; + agp_bridge.num_aperture_sizes = 7; + agp_bridge.dev_private_data = NULL; + agp_bridge.needs_scratch_page = FALSE; + agp_bridge.configure = ali_configure; + agp_bridge.fetch_size = ali_fetch_size; + agp_bridge.cleanup = ali_cleanup; + agp_bridge.tlb_flush = ali_tlbflush; + agp_bridge.mask_memory = ali_mask_memory; + agp_bridge.agp_enable = agp_generic_agp_enable; + agp_bridge.cache_flush = ali_cache_flush; + agp_bridge.create_gatt_table = agp_generic_create_gatt_table; + agp_bridge.free_gatt_table = agp_generic_free_gatt_table; + agp_bridge.insert_memory = agp_generic_insert_memory; + agp_bridge.remove_memory = agp_generic_remove_memory; + agp_bridge.alloc_by_type = agp_generic_alloc_by_type; + agp_bridge.free_by_type = agp_generic_free_by_type; + agp_bridge.agp_alloc_page = ali_alloc_page; + agp_bridge.agp_destroy_page = ali_destroy_page; + + return 0; + + (void) pdev; /* unused */ +} + +#endif /* CONFIG_AGP_ALI */ + +#ifdef CONFIG_AGP_SWORKS +typedef struct _serverworks_page_map { + unsigned long *real; + unsigned long *remapped; +} serverworks_page_map; + +static struct _serverworks_private { + struct pci_dev *svrwrks_dev; /* device one */ + volatile u8 *registers; + serverworks_page_map **gatt_pages; + int num_tables; + serverworks_page_map scratch_dir; + + int gart_addr_ofs; + int mm_addr_ofs; +} serverworks_private; + +static int serverworks_create_page_map(serverworks_page_map *page_map) +{ + int i; + + page_map->real = (unsigned long *) __get_free_page(GFP_KERNEL); + if (page_map->real == NULL) { + return -ENOMEM; + } + set_bit(PG_reserved, &virt_to_page(page_map->real)->flags); + CACHE_FLUSH(); + page_map->remapped = ioremap_nocache(virt_to_phys(page_map->real), + PAGE_SIZE); + if (page_map->remapped == NULL) { + clear_bit(PG_reserved, + &virt_to_page(page_map->real)->flags); + free_page((unsigned long) page_map->real); + page_map->real = NULL; + return -ENOMEM; + } + CACHE_FLUSH(); + + for(i = 0; i < AGP_PAGE_SIZE / sizeof(unsigned long); i++) { + page_map->remapped[i] = agp_bridge.scratch_page; + } + + return 0; +} + +static void serverworks_free_page_map(serverworks_page_map *page_map) +{ + iounmap(page_map->remapped); + clear_bit(PG_reserved, + &virt_to_page(page_map->real)->flags); + free_page((unsigned long) page_map->real); +} + +static void serverworks_free_gatt_pages(void) +{ + int i; + serverworks_page_map **tables; + serverworks_page_map *entry; + + tables = serverworks_private.gatt_pages; + for(i = 0; i < serverworks_private.num_tables; i++) { + entry = tables[i]; + if (entry != NULL) { + if (entry->real != NULL && + (i % (PAGE_SIZE/AGP_PAGE_SIZE)) == 0) { + serverworks_free_page_map(entry); + } + kfree(entry); + } + } + kfree(tables); +} + +static int serverworks_create_gatt_pages(int nr_tables) +{ + serverworks_page_map **tables; + serverworks_page_map *entry; + int retval = 0; + int i; + + tables = kmalloc((nr_tables + 1) * sizeof(serverworks_page_map *), + GFP_KERNEL); + if (tables == NULL) { + return -ENOMEM; + } + memset(tables, 0, sizeof(serverworks_page_map *) * (nr_tables + 1)); + for (i = 0; i < nr_tables; i++) { + entry = kmalloc(sizeof(serverworks_page_map), GFP_KERNEL); + if (entry == NULL) { + retval = -ENOMEM; + break; + } + memset(entry, 0, sizeof(serverworks_page_map)); + tables[i] = entry; + if ((i % 
(PAGE_SIZE/AGP_PAGE_SIZE)) == 0) {
+ retval = serverworks_create_page_map(entry);
+ if (retval != 0) break;
+ } else {
+ entry->real = tables[i-1]->real + 1024;
+ entry->remapped = tables[i-1]->remapped + 1024;
+ }
+ }
+ serverworks_private.num_tables = nr_tables;
+ serverworks_private.gatt_pages = tables;
+
+ if (retval != 0) serverworks_free_gatt_pages();
+
+ return retval;
+}
+
+#define SVRWRKS_GET_GATT(addr) (serverworks_private.gatt_pages[\
+ GET_PAGE_DIR_IDX(addr)]->remapped)
+
+#ifndef GET_PAGE_DIR_OFF
+#define GET_PAGE_DIR_OFF(addr) (addr >> 22)
+#endif
+
+#ifndef GET_PAGE_DIR_IDX
+#define GET_PAGE_DIR_IDX(addr) (GET_PAGE_DIR_OFF(addr) - \
+ GET_PAGE_DIR_OFF(agp_bridge.gart_bus_addr))
+#endif
+
+#ifndef GET_GATT_OFF
+#define GET_GATT_OFF(addr) ((addr & 0x003ff000) >> 12)
+#endif
+
+static int serverworks_create_gatt_table(void)
+{
+ aper_size_info_lvl2 *value;
+ serverworks_page_map page_dir;
+ int retval;
+ u32 temp;
+ int i;
+
+ value = A_SIZE_LVL2(agp_bridge.current_size);
+ retval = serverworks_create_page_map(&page_dir);
+ if (retval != 0) {
+ return retval;
+ }
+ retval = serverworks_create_page_map(&serverworks_private.scratch_dir);
+ if (retval != 0) {
+ serverworks_free_page_map(&page_dir);
+ return retval;
+ }
+ /* Create a fake scratch directory */
+ for(i = 0; i < AGP_PAGE_SIZE / sizeof(unsigned long); i++) {
+ serverworks_private.scratch_dir.remapped[i] = (unsigned long) agp_bridge.scratch_page;
+ page_dir.remapped[i] =
+ virt_to_bus(serverworks_private.scratch_dir.real);
+ page_dir.remapped[i] |= 0x00000001;
+ }
+
+ retval = serverworks_create_gatt_pages(value->num_entries / 1024);
+ if (retval != 0) {
+ serverworks_free_page_map(&page_dir);
+ serverworks_free_page_map(&serverworks_private.scratch_dir);
+ return retval;
+ }
+
+ agp_bridge.gatt_table_real = page_dir.real;
+ agp_bridge.gatt_table = page_dir.remapped;
+ agp_bridge.gatt_bus_addr = virt_to_bus(page_dir.real);
+
+ /* Get the address for the gart region.
+ * This is a bus address even on the alpha, because it's
+ * used to program the agp master, not the cpu
+ */
+
+ pci_read_config_dword(agp_bridge.dev,
+ serverworks_private.gart_addr_ofs,
+ &temp);
+ agp_bridge.gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
+
+ /* Calculate the agp offset */
+
+ for(i = 0; i < value->num_entries / 1024; i++) {
+ page_dir.remapped[i] =
+ virt_to_bus(serverworks_private.gatt_pages[i]->real);
+ page_dir.remapped[i] |= 0x00000001;
+ }
+
+ return 0;
+}
+
+static int serverworks_free_gatt_table(void)
+{
+ serverworks_page_map page_dir;
+
+ page_dir.real = agp_bridge.gatt_table_real;
+ page_dir.remapped = agp_bridge.gatt_table;
+
+ serverworks_free_gatt_pages();
+ serverworks_free_page_map(&page_dir);
+ serverworks_free_page_map(&serverworks_private.scratch_dir);
+ return 0;
+}
+
+static int serverworks_fetch_size(void)
+{
+ int i;
+ u32 temp;
+ u32 temp2;
+ aper_size_info_lvl2 *values;
+
+ values = A_SIZE_LVL2(agp_bridge.aperture_sizes);
+ pci_read_config_dword(agp_bridge.dev,
+ serverworks_private.gart_addr_ofs,
+ &temp);
+ pci_write_config_dword(agp_bridge.dev,
+ serverworks_private.gart_addr_ofs,
+ 0xfe000000);
+ pci_read_config_dword(agp_bridge.dev,
+ serverworks_private.gart_addr_ofs,
+ &temp2);
+ pci_write_config_dword(agp_bridge.dev,
+ serverworks_private.gart_addr_ofs,
+ temp);
+ temp2 &= SVWRKS_SIZE_MASK;
+
+ for (i = 0; i < agp_bridge.num_aperture_sizes; i++) {
+ if (temp2 == values[i].size_value) {
+ agp_bridge.previous_size =
+ agp_bridge.current_size = (void *) (values + i);
+
+ agp_bridge.aperture_size_idx = i;
+ return values[i].size;
+ }
+ }
+
+ return 0;
+}
+
+static int serverworks_configure(void)
+{
+ aper_size_info_lvl2 *current_size;
+ u32 temp;
+ u8 enable_reg;
+ u8 cap_ptr;
+ u32 cap_id;
+ u16 cap_reg;
+
+ current_size = A_SIZE_LVL2(agp_bridge.current_size);
+
+ /* Get the memory mapped registers */
+ pci_read_config_dword(agp_bridge.dev,
+ serverworks_private.mm_addr_ofs,
+ &temp);
+ temp = (temp & PCI_BASE_ADDRESS_MEM_MASK);
+ serverworks_private.registers = (volatile u8 *) ioremap(temp, 4096);
+
+ OUTREG8(serverworks_private.registers, SVWRKS_GART_CACHE, 0x0a);
+
+ OUTREG32(serverworks_private.registers, SVWRKS_GATTBASE,
+ agp_bridge.gatt_bus_addr);
+
+ cap_reg = INREG16(serverworks_private.registers, SVWRKS_COMMAND);
+ cap_reg &= ~0x0007;
+ cap_reg |= 0x4;
+ OUTREG16(serverworks_private.registers, SVWRKS_COMMAND, cap_reg);
+
+ pci_read_config_byte(serverworks_private.svrwrks_dev,
+ SVWRKS_AGP_ENABLE, &enable_reg);
+ enable_reg |= 0x1; /* Agp Enable bit */
+ pci_write_config_byte(serverworks_private.svrwrks_dev,
+ SVWRKS_AGP_ENABLE, enable_reg);
+ agp_bridge.tlb_flush(NULL);
+
+ pci_read_config_byte(serverworks_private.svrwrks_dev, 0x34, &cap_ptr);
+ if (cap_ptr != 0x00) {
+ do {
+ pci_read_config_dword(serverworks_private.svrwrks_dev,
+ cap_ptr, &cap_id);
+
+ if ((cap_id & 0xff) != 0x02)
+ cap_ptr = (cap_id >> 8) & 0xff;
+ }
+ while (((cap_id & 0xff) != 0x02) && (cap_ptr != 0x00));
+ }
+ agp_bridge.capndx = cap_ptr;
+
+ /* Fill in the mode register */
+ pci_read_config_dword(serverworks_private.svrwrks_dev,
+ agp_bridge.capndx + 4,
+ &agp_bridge.mode);
+
+ pci_read_config_byte(agp_bridge.dev,
+ SVWRKS_CACHING,
+ &enable_reg);
+ enable_reg &= ~0x3;
+ pci_write_config_byte(agp_bridge.dev,
+ SVWRKS_CACHING,
+ enable_reg);
+
+ pci_read_config_byte(agp_bridge.dev,
+ SVWRKS_FEATURE,
+ &enable_reg);
+ enable_reg |= (1<<6);
+ pci_write_config_byte(agp_bridge.dev,
+ SVWRKS_FEATURE,
+ enable_reg);
+
+ return 0;
+}
+
+static void serverworks_cleanup(void)
+{
+ iounmap((void *) serverworks_private.registers);
+}
+
+/*
+ * This routine could be implemented by taking the addresses
+ * written to the GATT, and flushing them individually. However
+ * currently it just flushes the whole table, which is probably
+ * more efficient, since agp_memory blocks can span a large number of
+ * entries.
+ */
+
+static void serverworks_tlbflush(agp_memory * temp)
+{
+ unsigned long end;
+
+ OUTREG8(serverworks_private.registers, SVWRKS_POSTFLUSH, 0x01);
+ end = jiffies + 3*HZ;
+ while(INREG8(serverworks_private.registers,
+ SVWRKS_POSTFLUSH) == 0x01) {
+ if((signed)(end - jiffies) <= 0) {
+ printk(KERN_ERR "Posted write buffer flush took more "
+ "than 3 seconds\n");
+ }
+ }
+ OUTREG32(serverworks_private.registers, SVWRKS_DIRFLUSH, 0x00000001);
+ end = jiffies + 3*HZ;
+ while(INREG32(serverworks_private.registers,
+ SVWRKS_DIRFLUSH) == 0x00000001) {
+ if((signed)(end - jiffies) <= 0) {
+ printk(KERN_ERR "TLB flush took more "
+ "than 3 seconds\n");
+ }
+ }
+}
+
+static unsigned long serverworks_mask_memory(unsigned long addr, int type)
+{
+ /* Only type 0 is supported by the serverworks chipsets */
+
+ return addr | agp_bridge.masks[0].mask;
+}
+
+static int serverworks_insert_memory(agp_memory * mem,
+ off_t pg_start, int type)
+{
+ int i, j, num_entries;
+ unsigned long *cur_gatt;
+ unsigned long addr;
+
+ num_entries = A_SIZE_LVL2(agp_bridge.current_size)->num_entries;
+
+ if (type != 0 || mem->type != 0) {
+ return -EINVAL;
+ }
+ if ((pg_start + mem->page_count) > num_entries) {
+ return -EINVAL;
+ }
+
+ j = pg_start;
+ while (j < (pg_start + mem->page_count)) {
+ addr = (j * AGP_PAGE_SIZE) + agp_bridge.gart_bus_addr;
+ cur_gatt = SVRWRKS_GET_GATT(addr);
+ if (!PGE_EMPTY(cur_gatt[GET_GATT_OFF(addr)])) {
+ return -EBUSY;
+ }
+ j++;
+ }
+
+ if (mem->is_flushed == FALSE) {
+ CACHE_FLUSH();
+ mem->is_flushed = TRUE;
+ }
+
+ for (i = 0, j = pg_start; i < mem->page_count; i++, j++) {
+ addr = (j * AGP_PAGE_SIZE) + agp_bridge.gart_bus_addr;
+ cur_gatt = SVRWRKS_GET_GATT(addr);
+ cur_gatt[GET_GATT_OFF(addr)] = mem->memory[i];
+ }
+ agp_bridge.tlb_flush(mem);
+ return 0;
+}
+
+static int serverworks_remove_memory(agp_memory * mem, off_t pg_start,
+ int type)
+{
+ int i;
+ unsigned long *cur_gatt;
+ unsigned long addr;
+
+ if (type != 0 || mem->type != 0) {
+ return -EINVAL;
+ }
+
+ CACHE_FLUSH();
+ agp_bridge.tlb_flush(mem);
+
+ for (i = pg_start; i < (mem->page_count + pg_start); i++) {
+ addr = (i * AGP_PAGE_SIZE) + agp_bridge.gart_bus_addr;
+ cur_gatt = SVRWRKS_GET_GATT(addr);
+ cur_gatt[GET_GATT_OFF(addr)] =
+ (unsigned long) agp_bridge.scratch_page;
+ }
+
+ agp_bridge.tlb_flush(mem);
+ return 0;
+}
+
+static gatt_mask serverworks_masks[] =
+{
+ {0x00000001, 0}
+};
+
+static aper_size_info_lvl2 serverworks_sizes[7] =
+{
+ {2048, 524288, 0x80000000},
+ {1024, 262144, 0xc0000000},
+ {512, 131072, 0xe0000000},
+ {256, 65536, 0xf0000000},
+ {128, 32768, 0xf8000000},
+ {64, 16384, 0xfc000000},
+ {32, 8192, 0xfe000000}
+};
+
+static void serverworks_agp_enable(u32 mode)
+{
+ struct pci_dev *device = NULL;
+ u32 command, scratch, cap_id;
+ u8 cap_ptr;
+
+ pci_read_config_dword(serverworks_private.svrwrks_dev,
+ agp_bridge.capndx + 4,
+ &command);
+
+ /*
+ * PASS1: Go through all devices that claim to be
+ * AGP devices and collect their data.
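+ *
+ * A device qualifies if its Capabilities List bit is set (bit 20 of
+ * the dword at config offset 0x04, i.e. bit 4 of the PCI status
+ * word); the walk below then follows the capability chain from the
+ * pointer at offset 0x34, looking for capability ID 0x02, the AGP
+ * capability.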
+ */
+
+ while ((device = pci_find_class(PCI_CLASS_DISPLAY_VGA << 8,
+ device)) != NULL) {
+ pci_read_config_dword(device, 0x04, &scratch);
+
+ if (!(scratch & 0x00100000))
+ continue;
+
+ pci_read_config_byte(device, 0x34, &cap_ptr);
+
+ if (cap_ptr != 0x00) {
+ do {
+ pci_read_config_dword(device,
+ cap_ptr, &cap_id);
+
+ if ((cap_id & 0xff) != 0x02)
+ cap_ptr = (cap_id >> 8) & 0xff;
+ }
+ while (((cap_id & 0xff) != 0x02) && (cap_ptr != 0x00));
+ }
+ if (cap_ptr != 0x00) {
+ /*
+ * Ok, here we have an AGP device. Disable impossible
+ * settings, and adjust the readqueue to the minimum.
+ */
+
+ pci_read_config_dword(device, cap_ptr + 4, &scratch);
+
+ /* adjust RQ depth */
+ command =
+ ((command & ~0xff000000) |
+ min(u32, (mode & 0xff000000),
+ min(u32, (command & 0xff000000),
+ (scratch & 0xff000000))));
+
+ /* disable SBA if it's not supported */
+ if (!((command & 0x00000200) &&
+ (scratch & 0x00000200) &&
+ (mode & 0x00000200)))
+ command &= ~0x00000200;
+
+ /* disable FW */
+ command &= ~0x00000010;
+
+ command &= ~0x00000008;
+
+ if (!((command & 4) &&
+ (scratch & 4) &&
+ (mode & 4)))
+ command &= ~0x00000004;
+
+ if (!((command & 2) &&
+ (scratch & 2) &&
+ (mode & 2)))
+ command &= ~0x00000002;
+
+ if (!((command & 1) &&
+ (scratch & 1) &&
+ (mode & 1)))
+ command &= ~0x00000001;
+ }
+ }
+ /*
+ * PASS2: Figure out the 4X/2X/1X setting and enable the
+ * target (our motherboard chipset).
+ */
+
+ if (command & 4) {
+ command &= ~3; /* 4X */
+ }
+ if (command & 2) {
+ command &= ~5; /* 2X */
+ }
+ if (command & 1) {
+ command &= ~6; /* 1X */
+ }
+ command |= 0x00000100;
+
+ pci_write_config_dword(serverworks_private.svrwrks_dev,
+ agp_bridge.capndx + 8,
+ command);
+
+ /*
+ * PASS3: Go through all AGP devices and update the
+ * command registers.
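+ *
+ * Each master is written with the same negotiated command value as
+ * the target, so every device ends up with matching rate, SBA and
+ * request-queue settings.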
+ */ + + while ((device = pci_find_class(PCI_CLASS_DISPLAY_VGA << 8, + device)) != NULL) { + pci_read_config_dword(device, 0x04, &scratch); + + if (!(scratch & 0x00100000)) + continue; + + pci_read_config_byte(device, 0x34, &cap_ptr); + + if (cap_ptr != 0x00) { + do { + pci_read_config_dword(device, + cap_ptr, &cap_id); + + if ((cap_id & 0xff) != 0x02) + cap_ptr = (cap_id >> 8) & 0xff; + } + while (((cap_id & 0xff) != 0x02) && (cap_ptr != 0x00)); + } + if (cap_ptr != 0x00) + pci_write_config_dword(device, cap_ptr + 8, command); + } +} + +static int __init serverworks_setup (struct pci_dev *pdev) +{ + u32 temp; + u32 temp2; + + serverworks_private.svrwrks_dev = pdev; + + agp_bridge.masks = serverworks_masks; + agp_bridge.num_of_masks = 1; + agp_bridge.aperture_sizes = (void *) serverworks_sizes; + agp_bridge.size_type = LVL2_APER_SIZE; + agp_bridge.num_aperture_sizes = 7; + agp_bridge.dev_private_data = (void *) &serverworks_private; + agp_bridge.needs_scratch_page = TRUE; + agp_bridge.configure = serverworks_configure; + agp_bridge.fetch_size = serverworks_fetch_size; + agp_bridge.cleanup = serverworks_cleanup; + agp_bridge.tlb_flush = serverworks_tlbflush; + agp_bridge.mask_memory = serverworks_mask_memory; + agp_bridge.agp_enable = serverworks_agp_enable; + agp_bridge.cache_flush = global_cache_flush; + agp_bridge.create_gatt_table = serverworks_create_gatt_table; + agp_bridge.free_gatt_table = serverworks_free_gatt_table; + agp_bridge.insert_memory = serverworks_insert_memory; + agp_bridge.remove_memory = serverworks_remove_memory; + agp_bridge.alloc_by_type = agp_generic_alloc_by_type; + agp_bridge.free_by_type = agp_generic_free_by_type; + agp_bridge.agp_alloc_page = agp_generic_alloc_page; + agp_bridge.agp_destroy_page = agp_generic_destroy_page; + + pci_read_config_dword(agp_bridge.dev, + SVWRKS_APSIZE, + &temp); + + serverworks_private.gart_addr_ofs = 0x10; + + if(temp & PCI_BASE_ADDRESS_MEM_TYPE_64) { + pci_read_config_dword(agp_bridge.dev, + SVWRKS_APSIZE + 4, + &temp2); + if(temp2 != 0) { + printk("Detected 64 bit aperture address, but top " + "bits are not zero. Disabling agp\n"); + return -ENODEV; + } + serverworks_private.mm_addr_ofs = 0x18; + } else { + serverworks_private.mm_addr_ofs = 0x14; + } + + pci_read_config_dword(agp_bridge.dev, + serverworks_private.mm_addr_ofs, + &temp); + if(temp & PCI_BASE_ADDRESS_MEM_TYPE_64) { + pci_read_config_dword(agp_bridge.dev, + serverworks_private.mm_addr_ofs + 4, + &temp2); + if(temp2 != 0) { + printk("Detected 64 bit MMIO address, but top " + "bits are not zero. Disabling agp\n"); + return -ENODEV; + } + } + + return 0; +} + +#endif /* CONFIG_AGP_SWORKS */ + + +/* per-chipset initialization data. 
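+ * (scanned in order by agp_lookup_host_bridge() below; the
+ * device_id == 0 entry at the end of each vendor's block is the
+ * generic fallback tried when agp_try_unsupported is set)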
+ * note -- all chipsets for a single vendor MUST be grouped together + */ +static struct { + unsigned short device_id; /* first, to make table easier to read */ + unsigned short vendor_id; + enum chipset_type chipset; + const char *vendor_name; + const char *chipset_name; + int (*chipset_setup) (struct pci_dev *pdev); +} agp_bridge_info[] __initdata = { + +#ifdef CONFIG_AGP_ALI + { PCI_DEVICE_ID_AL_M1541_0, + PCI_VENDOR_ID_AL, + ALI_M1541, + "Ali", + "M1541", + ali_generic_setup }, + { PCI_DEVICE_ID_AL_M1621_0, + PCI_VENDOR_ID_AL, + ALI_M1621, + "Ali", + "M1621", + ali_generic_setup }, + { PCI_DEVICE_ID_AL_M1631_0, + PCI_VENDOR_ID_AL, + ALI_M1631, + "Ali", + "M1631", + ali_generic_setup }, + { PCI_DEVICE_ID_AL_M1632_0, + PCI_VENDOR_ID_AL, + ALI_M1632, + "Ali", + "M1632", + ali_generic_setup }, + { PCI_DEVICE_ID_AL_M1641_0, + PCI_VENDOR_ID_AL, + ALI_M1641, + "Ali", + "M1641", + ali_generic_setup }, + { PCI_DEVICE_ID_AL_M1647_0, + PCI_VENDOR_ID_AL, + ALI_M1647, + "Ali", + "M1647", + ali_generic_setup }, + { PCI_DEVICE_ID_AL_M1651_0, + PCI_VENDOR_ID_AL, + ALI_M1651, + "Ali", + "M1651", + ali_generic_setup }, + { 0, + PCI_VENDOR_ID_AL, + ALI_GENERIC, + "Ali", + "Generic", + ali_generic_setup }, +#endif /* CONFIG_AGP_ALI */ + +#ifdef CONFIG_AGP_AMD + { PCI_DEVICE_ID_AMD_IRONGATE_0, + PCI_VENDOR_ID_AMD, + AMD_IRONGATE, + "AMD", + "Irongate", + amd_irongate_setup }, + { 0, + PCI_VENDOR_ID_AMD, + AMD_GENERIC, + "AMD", + "Generic", + amd_irongate_setup }, +#endif /* CONFIG_AGP_AMD */ + +#ifdef CONFIG_AGP_INTEL + { PCI_DEVICE_ID_INTEL_82443LX_0, + PCI_VENDOR_ID_INTEL, + INTEL_LX, + "Intel", + "440LX", + intel_generic_setup }, + { PCI_DEVICE_ID_INTEL_82443BX_0, + PCI_VENDOR_ID_INTEL, + INTEL_BX, + "Intel", + "440BX", + intel_generic_setup }, + { PCI_DEVICE_ID_INTEL_82443GX_0, + PCI_VENDOR_ID_INTEL, + INTEL_GX, + "Intel", + "440GX", + intel_generic_setup }, + /* could we add support for PCI_DEVICE_ID_INTEL_815_1 too ? 
*/ + { PCI_DEVICE_ID_INTEL_815_0, + PCI_VENDOR_ID_INTEL, + INTEL_I815, + "Intel", + "i815", + intel_generic_setup }, + { PCI_DEVICE_ID_INTEL_840_0, + PCI_VENDOR_ID_INTEL, + INTEL_I840, + "Intel", + "i840", + intel_840_setup }, + { PCI_DEVICE_ID_INTEL_850_0, + PCI_VENDOR_ID_INTEL, + INTEL_I850, + "Intel", + "i850", + intel_850_setup }, + { 0, + PCI_VENDOR_ID_INTEL, + INTEL_GENERIC, + "Intel", + "Generic", + intel_generic_setup }, +#endif /* CONFIG_AGP_INTEL */ + +#ifdef CONFIG_AGP_SIS + { PCI_DEVICE_ID_SI_630, + PCI_VENDOR_ID_SI, + SIS_GENERIC, + "SiS", + "630", + sis_generic_setup }, + { PCI_DEVICE_ID_SI_540, + PCI_VENDOR_ID_SI, + SIS_GENERIC, + "SiS", + "540", + sis_generic_setup }, + { PCI_DEVICE_ID_SI_620, + PCI_VENDOR_ID_SI, + SIS_GENERIC, + "SiS", + "620", + sis_generic_setup }, + { PCI_DEVICE_ID_SI_530, + PCI_VENDOR_ID_SI, + SIS_GENERIC, + "SiS", + "530", + sis_generic_setup }, + { PCI_DEVICE_ID_SI_630, + PCI_VENDOR_ID_SI, + SIS_GENERIC, + "SiS", + "Generic", + sis_generic_setup }, + { PCI_DEVICE_ID_SI_540, + PCI_VENDOR_ID_SI, + SIS_GENERIC, + "SiS", + "Generic", + sis_generic_setup }, + { PCI_DEVICE_ID_SI_620, + PCI_VENDOR_ID_SI, + SIS_GENERIC, + "SiS", + "Generic", + sis_generic_setup }, + { PCI_DEVICE_ID_SI_530, + PCI_VENDOR_ID_SI, + SIS_GENERIC, + "SiS", + "Generic", + sis_generic_setup }, + { 0, + PCI_VENDOR_ID_SI, + SIS_GENERIC, + "SiS", + "Generic", + sis_generic_setup }, +#endif /* CONFIG_AGP_SIS */ + +#ifdef CONFIG_AGP_VIA + { PCI_DEVICE_ID_VIA_8501_0, + PCI_VENDOR_ID_VIA, + VIA_MVP4, + "Via", + "MVP4", + via_generic_setup }, + { PCI_DEVICE_ID_VIA_82C597_0, + PCI_VENDOR_ID_VIA, + VIA_VP3, + "Via", + "VP3", + via_generic_setup }, + { PCI_DEVICE_ID_VIA_82C598_0, + PCI_VENDOR_ID_VIA, + VIA_MVP3, + "Via", + "MVP3", + via_generic_setup }, + { PCI_DEVICE_ID_VIA_82C691_0, + PCI_VENDOR_ID_VIA, + VIA_APOLLO_PRO, + "Via", + "Apollo Pro", + via_generic_setup }, + { PCI_DEVICE_ID_VIA_8371_0, + PCI_VENDOR_ID_VIA, + VIA_APOLLO_KX133, + "Via", + "Apollo Pro KX133", + via_generic_setup }, + { PCI_DEVICE_ID_VIA_8363_0, + PCI_VENDOR_ID_VIA, + VIA_APOLLO_KT133, + "Via", + "Apollo Pro KT133", + via_generic_setup }, + { 0, + PCI_VENDOR_ID_VIA, + VIA_GENERIC, + "Via", + "Generic", + via_generic_setup }, +#endif /* CONFIG_AGP_VIA */ + + { 0, }, /* dummy final entry, always present */ +}; + + +/* scan table above for supported devices */ +static int __init agp_lookup_host_bridge (struct pci_dev *pdev) +{ + int i; + + for (i = 0; i < ARRAY_SIZE (agp_bridge_info); i++) + if (pdev->vendor == agp_bridge_info[i].vendor_id) + break; + + if (i >= ARRAY_SIZE (agp_bridge_info)) { + printk (KERN_DEBUG PFX "unsupported bridge\n"); + return -ENODEV; + } + + while ((i < ARRAY_SIZE (agp_bridge_info)) && + (agp_bridge_info[i].vendor_id == pdev->vendor)) { + if (pdev->device == agp_bridge_info[i].device_id) { +#ifdef CONFIG_AGP_ALI + if (pdev->device == PCI_DEVICE_ID_AL_M1621_0) { + u8 hidden_1621_id; + + pci_read_config_byte(pdev, 0xFB, &hidden_1621_id); + switch (hidden_1621_id) { + case 0x31: + agp_bridge_info[i].chipset_name="M1631"; + break; + case 0x32: + agp_bridge_info[i].chipset_name="M1632"; + break; + case 0x41: + agp_bridge_info[i].chipset_name="M1641"; + break; + case 0x43: + break; + case 0x47: + agp_bridge_info[i].chipset_name="M1647"; + break; + case 0x51: + agp_bridge_info[i].chipset_name="M1651"; + break; + default: + break; + } + } +#endif + + printk (KERN_INFO PFX "Detected %s %s chipset\n", + agp_bridge_info[i].vendor_name, + agp_bridge_info[i].chipset_name); + agp_bridge.type = 
agp_bridge_info[i].chipset; + return agp_bridge_info[i].chipset_setup (pdev); + } + + i++; + } + + i--; /* point to vendor generic entry (device_id == 0) */ + + /* try init anyway, if user requests it AND + * there is a 'generic' bridge entry for this vendor */ + if (agp_try_unsupported && agp_bridge_info[i].device_id == 0) { + printk(KERN_WARNING PFX "Trying generic %s routines" + " for device id: %04x\n", + agp_bridge_info[i].vendor_name, pdev->device); + agp_bridge.type = agp_bridge_info[i].chipset; + return agp_bridge_info[i].chipset_setup (pdev); + } + + printk(KERN_ERR PFX "Unsupported %s chipset (device id: %04x)," + " you might want to try agp_try_unsupported=1.\n", + agp_bridge_info[i].vendor_name, pdev->device); + return -ENODEV; +} + + +/* Supported Device Scanning routine */ + +static int __init agp_find_supported_device(void) +{ + struct pci_dev *dev = NULL; + u8 cap_ptr = 0x00; + u32 cap_id, scratch; + + if ((dev = pci_find_class(PCI_CLASS_BRIDGE_HOST << 8, NULL)) == NULL) + return -ENODEV; + + agp_bridge.dev = dev; + + /* Need to test for I810 here */ +#ifdef CONFIG_AGP_I810 + if (dev->vendor == PCI_VENDOR_ID_INTEL) { + struct pci_dev *i810_dev; + + switch (dev->device) { + case PCI_DEVICE_ID_INTEL_810_0: + i810_dev = pci_find_device(PCI_VENDOR_ID_INTEL, + PCI_DEVICE_ID_INTEL_810_1, + NULL); + if (i810_dev == NULL) { + printk(KERN_ERR PFX "Detected an Intel i810," + " but could not find the secondary" + " device.\n"); + return -ENODEV; + } + printk(KERN_INFO PFX "Detected an Intel " + "i810 Chipset.\n"); + agp_bridge.type = INTEL_I810; + return intel_i810_setup (i810_dev); + + case PCI_DEVICE_ID_INTEL_810_DC100_0: + i810_dev = pci_find_device(PCI_VENDOR_ID_INTEL, + PCI_DEVICE_ID_INTEL_810_DC100_1, + NULL); + if (i810_dev == NULL) { + printk(KERN_ERR PFX "Detected an Intel i810 " + "DC100, but could not find the " + "secondary device.\n"); + return -ENODEV; + } + printk(KERN_INFO PFX "Detected an Intel i810 " + "DC100 Chipset.\n"); + agp_bridge.type = INTEL_I810; + return intel_i810_setup(i810_dev); + + case PCI_DEVICE_ID_INTEL_810_E_0: + i810_dev = pci_find_device(PCI_VENDOR_ID_INTEL, + PCI_DEVICE_ID_INTEL_810_E_1, + NULL); + if (i810_dev == NULL) { + printk(KERN_ERR PFX "Detected an Intel i810 E" + ", but could not find the secondary " + "device.\n"); + return -ENODEV; + } + printk(KERN_INFO PFX "Detected an Intel i810 E " + "Chipset.\n"); + agp_bridge.type = INTEL_I810; + return intel_i810_setup(i810_dev); + + case PCI_DEVICE_ID_INTEL_815_0: + /* The i815 can operate either as an i810 style + * integrated device, or as an AGP4X motherboard. + * + * This only addresses the first mode: + */ + i810_dev = pci_find_device(PCI_VENDOR_ID_INTEL, + PCI_DEVICE_ID_INTEL_815_1, + NULL); + if (i810_dev == NULL) { + printk(KERN_ERR PFX "agpgart: Detected an " + "Intel i815, but could not find the" + " secondary device. 
Assuming a " + "non-integrated video card.\n"); + break; + } + printk(KERN_INFO PFX "agpgart: Detected an Intel i815 " + "Chipset.\n"); + agp_bridge.type = INTEL_I810; + return intel_i810_setup(i810_dev); + + default: + break; + } + } +#endif /* CONFIG_AGP_I810 */ + +#ifdef CONFIG_AGP_SWORKS + /* Everything is on func 1 here so we are hardcoding function one */ + if (dev->vendor == PCI_VENDOR_ID_SERVERWORKS) { + struct pci_dev *bridge_dev; + + bridge_dev = pci_find_slot ((unsigned int)dev->bus->number, + PCI_DEVFN(0, 1)); + if(bridge_dev == NULL) { + printk(KERN_INFO PFX "agpgart: Detected a Serverworks " + "Chipset, but could not find the secondary " + "device.\n"); + return -ENODEV; + } + + switch (dev->device) { + case PCI_DEVICE_ID_SERVERWORKS_HE: + agp_bridge.type = SVWRKS_HE; + return serverworks_setup(bridge_dev); + + case PCI_DEVICE_ID_SERVERWORKS_LE: + case 0x0007: + agp_bridge.type = SVWRKS_LE; + return serverworks_setup(bridge_dev); + + default: + if(agp_try_unsupported) { + agp_bridge.type = SVWRKS_GENERIC; + return serverworks_setup(bridge_dev); + } + break; + } + } + +#endif /* CONFIG_AGP_SWORKS */ + + /* find capndx */ + pci_read_config_dword(dev, 0x04, &scratch); + if (!(scratch & 0x00100000)) + return -ENODEV; + + pci_read_config_byte(dev, 0x34, &cap_ptr); + if (cap_ptr != 0x00) { + do { + pci_read_config_dword(dev, cap_ptr, &cap_id); + + if ((cap_id & 0xff) != 0x02) + cap_ptr = (cap_id >> 8) & 0xff; + } + while (((cap_id & 0xff) != 0x02) && (cap_ptr != 0x00)); + } + if (cap_ptr == 0x00) + return -ENODEV; + agp_bridge.capndx = cap_ptr; + + /* Fill in the mode register */ + pci_read_config_dword(agp_bridge.dev, + agp_bridge.capndx + 4, + &agp_bridge.mode); + + /* probe for known chipsets */ + return agp_lookup_host_bridge (dev); +} + +struct agp_max_table { + int mem; + int agp; +}; + +static struct agp_max_table maxes_table[9] __initdata = +{ + {0, 0}, + {32, 4}, + {64, 28}, + {128, 96}, + {256, 204}, + {512, 440}, + {1024, 942}, + {2048, 1920}, + {4096, 3932} +}; + +static int __init agp_find_max (void) +{ + long memory, index, result; + + memory = virt_to_phys(high_memory) >> 20; + index = 1; + + while ((memory > maxes_table[index].mem) && + (index < 8)) { + index++; + } + + result = maxes_table[index - 1].agp + + ( (memory - maxes_table[index - 1].mem) * + (maxes_table[index].agp - maxes_table[index - 1].agp)) / + (maxes_table[index].mem - maxes_table[index - 1].mem); + + printk(KERN_INFO PFX "Maximum main memory to use " + "for agp memory: %ldM\n", result); + result <<= (20 - PAGE_SHIFT); /* convert to pages */ + result *= PAGE_SIZE / AGP_PAGE_SIZE; /* convert to AGP pages */ + return result; +} + +#define AGPGART_VERSION_MAJOR 0 +#define AGPGART_VERSION_MINOR 99 + +static agp_version agp_current_version = +{ + AGPGART_VERSION_MAJOR, + AGPGART_VERSION_MINOR +}; + +static int __init agp_backend_initialize(void) +{ + int size_value, rc, got_gatt=0, got_keylist=0; + + memset(&agp_bridge, 0, sizeof(struct agp_bridge_data)); + agp_bridge.type = NOT_SUPPORTED; + agp_bridge.max_memory_agp = agp_find_max(); + agp_bridge.version = &agp_current_version; + + rc = agp_find_supported_device(); + if (rc) { + /* not KERN_ERR because error msg should have already printed */ + printk(KERN_DEBUG PFX "no supported devices found.\n"); + return rc; + } + + if (agp_bridge.needs_scratch_page == TRUE) { + agp_bridge.scratch_page = agp_bridge.agp_alloc_page(); + + if (agp_bridge.scratch_page == 0) { + printk(KERN_ERR PFX "unable to get memory for " + "scratch page.\n"); + return -ENOMEM; + } + 
agp_bridge.scratch_page =
+ virt_to_phys((void *) agp_bridge.scratch_page);
+ agp_bridge.scratch_page =
+ agp_bridge.mask_memory(agp_bridge.scratch_page, 0);
+ }
+
+ size_value = agp_bridge.fetch_size();
+
+ if (size_value == 0) {
+ printk(KERN_ERR PFX "unable to determine aperture size.\n");
+ rc = -EINVAL;
+ goto err_out;
+ }
+ if (agp_bridge.create_gatt_table()) {
+ printk(KERN_ERR PFX "unable to get memory for graphics "
+ "translation table.\n");
+ rc = -ENOMEM;
+ goto err_out;
+ }
+ got_gatt = 1;
+
+ agp_bridge.key_list = vmalloc(MAXKEY/8);
+ if (agp_bridge.key_list == NULL) {
+ printk(KERN_ERR PFX "error allocating memory for key lists.\n");
+ rc = -ENOMEM;
+ goto err_out;
+ }
+ got_keylist = 1;
+
+ /* FIXME vmalloc'd memory not guaranteed contiguous */
+ memset(agp_bridge.key_list, 0, MAXKEY/8);
+
+ if (agp_bridge.configure()) {
+ printk(KERN_ERR PFX "error configuring host chipset.\n");
+ rc = -EINVAL;
+ goto err_out;
+ }
+
+ printk(KERN_INFO PFX "AGP aperture is %dM @ 0x%lx\n",
+ size_value, agp_bridge.gart_bus_addr);
+
+ return 0;
+
+err_out:
+ if (agp_bridge.needs_scratch_page == TRUE) {
+ agp_bridge.scratch_page &= ~(AGP_PAGE_SIZE-1);
+ agp_bridge.agp_destroy_page((unsigned long)
+ phys_to_virt(agp_bridge.scratch_page));
+ }
+ if (got_gatt)
+ agp_bridge.free_gatt_table();
+ if (got_keylist)
+ vfree(agp_bridge.key_list);
+ return rc;
+}
+
+
+/* cannot be __exit because it could be called from __init code */
+static void agp_backend_cleanup(void)
+{
+ agp_bridge.cleanup();
+ agp_bridge.free_gatt_table();
+ vfree(agp_bridge.key_list);
+
+ if (agp_bridge.needs_scratch_page == TRUE) {
+ agp_bridge.scratch_page &= ~(0x00000fff);
+ agp_bridge.agp_destroy_page((unsigned long)
+ phys_to_virt(agp_bridge.scratch_page));
+ }
+}
+
+extern int agp_frontend_initialize(void);
+extern void agp_frontend_cleanup(void);
+
+static const drm_agp_t drm_agp = {
+ &agp_free_memory,
+ &agp_allocate_memory,
+ &agp_bind_memory,
+ &agp_unbind_memory,
+ &agp_enable,
+ &agp_backend_acquire,
+ &agp_backend_release,
+ &agp_copy_info
+};
+
+static int __init agp_init(void)
+{
+ int ret_val;
+
+ printk(KERN_INFO "Linux agpgart interface v%d.%d (c) Jeff Hartmann\n",
+ AGPGART_VERSION_MAJOR, AGPGART_VERSION_MINOR);
+
+ ret_val = agp_backend_initialize();
+ if (ret_val) {
+ agp_bridge.type = NOT_SUPPORTED;
+ return ret_val;
+ }
+ ret_val = agp_frontend_initialize();
+ if (ret_val) {
+ agp_bridge.type = NOT_SUPPORTED;
+ agp_backend_cleanup();
+ return ret_val;
+ }
+
+ inter_module_register("drm_agp", THIS_MODULE, &drm_agp);
+ return 0;
+}
+
+static void __exit agp_cleanup(void)
+{
+ agp_frontend_cleanup();
+ agp_backend_cleanup();
+ inter_module_unregister("drm_agp");
+}
+
+module_init(agp_init);
+module_exit(agp_cleanup);
diff -urpN linux-2.4.9-linus/drivers/char/agp/agpgart_fe.c linux-2.4.9-larpage/drivers/char/agp/agpgart_fe.c
--- linux-2.4.9-linus/drivers/char/agp/agpgart_fe.c 2001-08-12 10:38:48.000000000 -0700
+++ linux-2.4.9-larpage/drivers/char/agp/agpgart_fe.c 2002-11-20 02:02:35.000000000 -0800
@@ -110,8 +110,8 @@ static agp_segment_priv *agp_find_seg_in
 agp_segment_priv *seg;
 int num_segments, pg_start, pg_count, i;
 
- pg_start = offset / 4096;
- pg_count = size / 4096;
+ pg_start = offset / AGP_PAGE_SIZE;
+ pg_count = size / AGP_PAGE_SIZE;
 seg = *(client->segments);
 num_segments = client->num_segments;
@@ -623,7 +623,7 @@ static int agp_mmap(struct file *file, s
 size = vma->vm_end - vma->vm_start;
 current_size = kerninfo.aper_size;
 current_size = current_size * 0x100000;
- offset = vma->vm_pgoff <<
PAGE_SHIFT; + offset = vma->vm_pgoff << MMUPAGE_SHIFT; if (test_bit(AGP_FF_IS_CLIENT, &priv->access_flags)) { if ((size + offset) > current_size) { diff -urpN linux-2.4.9-linus/drivers/char/console.c linux-2.4.9-larpage/drivers/char/console.c --- linux-2.4.9-linus/drivers/char/console.c 2001-08-12 10:02:17.000000000 -0700 +++ linux-2.4.9-larpage/drivers/char/console.c 2002-11-20 02:02:35.000000000 -0800 @@ -1798,8 +1798,8 @@ static void do_con_trol(struct tty_struc * since console_init (and thus con_init) are called before any * kernel memory allocation is available. */ -char con_buf[PAGE_SIZE]; -#define CON_BUF_SIZE PAGE_SIZE +#define CON_BUF_SIZE MMUPAGE_SIZE +char con_buf[CON_BUF_SIZE]; DECLARE_MUTEX(con_buf_sem); static int do_con_write(struct tty_struct * tty, int from_user, diff -urpN linux-2.4.9-linus/drivers/char/drm/drmP.h linux-2.4.9-larpage/drivers/char/drm/drmP.h --- linux-2.4.9-linus/drivers/char/drm/drmP.h 2001-08-15 14:21:47.000000000 -0700 +++ linux-2.4.9-larpage/drivers/char/drm/drmP.h 2002-11-20 02:02:37.000000000 -0800 @@ -147,7 +147,7 @@ #define DRM_MEM_STUB 19 #define DRM_MEM_SGLISTS 20 -#define DRM_MAX_CTXBITMAP (PAGE_SIZE * 8) +#define DRM_MAX_CTXBITMAP (4096 * 8) /* Backward compatibility section */ /* _PAGE_WT changed to _PAGE_PWT in 2.2.6 */ @@ -170,7 +170,7 @@ typedef struct wait_queue *wait_queue_he #if LINUX_VERSION_CODE < 0x020319 #define VM_OFFSET(vma) ((vma)->vm_offset) #else -#define VM_OFFSET(vma) ((vma)->vm_pgoff << PAGE_SHIFT) +#define VM_OFFSET(vma) ((vma)->vm_pgoff << MMUPAGE_SHIFT) #endif /* *_nopage return values defined in 2.3.26 */ @@ -511,7 +511,7 @@ typedef struct drm_buf_entry { int buf_count; drm_buf_t *buflist; int seg_count; - int page_order; + int page_order; /* in PAGE_SIZE units */ unsigned long *seglist; drm_freelist_t freelist; @@ -602,7 +602,7 @@ typedef struct drm_agp_mem { unsigned long handle; agp_memory *memory; unsigned long bound; /* address */ - int pages; + int pages; /* in AGP_PAGE_SIZE units */ struct drm_agp_mem *prev; struct drm_agp_mem *next; } drm_agp_mem_t; diff -urpN linux-2.4.9-linus/drivers/char/drm/drmP.h.orig linux-2.4.9-larpage/drivers/char/drm/drmP.h.orig --- linux-2.4.9-linus/drivers/char/drm/drmP.h.orig 1969-12-31 16:00:00.000000000 -0800 +++ linux-2.4.9-larpage/drivers/char/drm/drmP.h.orig 2002-11-20 02:02:37.000000000 -0800 @@ -0,0 +1,1025 @@ +/* drmP.h -- Private header for Direct Rendering Manager -*- linux-c -*- + * Created: Mon Jan 4 10:05:05 1999 by faith@precisioninsight.com + * + * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. + * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. + * All rights reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice (including the next + * paragraph) shall be included in all copies or substantial portions of the + * Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL
+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Rickard E. (Rik) Faith <faith@valinux.com>
+ * Gareth Hughes <gareth@valinux.com>
+ */
+
+#ifndef _DRM_P_H_
+#define _DRM_P_H_
+
+#ifdef __KERNEL__
+#ifdef __alpha__
+/* add include of current.h so that "current" is defined
+ * before static inline funcs in wait.h. Doing this so we
+ * can build the DRM (part of PI DRI). 4/21/2000 S + B */
+#include <asm/current.h>
+#endif /* __alpha__ */
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/miscdevice.h>
+#include <linux/fs.h>
+#include <linux/proc_fs.h>
+#include <linux/init.h>
+#include <linux/file.h>
+#include <linux/pci.h>
+#include <linux/wrapper.h>
+#include <linux/version.h>
+#include <linux/smp_lock.h> /* For (un)lock_kernel */
+#include <linux/mm.h>
+#if defined(__alpha__) || defined(__powerpc__)
+#include <asm/pgtable.h> /* For pte_wrprotect */
+#endif
+#include <asm/io.h>
+#include <asm/mman.h>
+#include <asm/uaccess.h>
+#ifdef CONFIG_MTRR
+#include <asm/mtrr.h>
+#endif
+#if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE)
+#include <linux/types.h>
+#include <linux/agp_backend.h>
+#endif
+#if LINUX_VERSION_CODE >= 0x020100 /* KERNEL_VERSION(2,1,0) */
+#include <linux/tqueue.h>
+#include <linux/poll.h>
+#endif
+#if LINUX_VERSION_CODE < 0x020400
+#include "compat-pre24.h"
+#endif
+#include <asm/pgalloc.h>
+#include "drm.h"
+
+/* DRM template customization defaults
+ */
+#ifndef __HAVE_AGP
+#define __HAVE_AGP 0
+#endif
+#ifndef __HAVE_MTRR
+#define __HAVE_MTRR 0
+#endif
+#ifndef __HAVE_CTX_BITMAP
+#define __HAVE_CTX_BITMAP 0
+#endif
+#ifndef __HAVE_DMA
+#define __HAVE_DMA 0
+#endif
+#ifndef __HAVE_DMA_IRQ
+#define __HAVE_DMA_IRQ 0
+#endif
+#ifndef __HAVE_DMA_WAITLIST
+#define __HAVE_DMA_WAITLIST 0
+#endif
+#ifndef __HAVE_DMA_FREELIST
+#define __HAVE_DMA_FREELIST 0
+#endif
+#ifndef __HAVE_DMA_HISTOGRAM
+#define __HAVE_DMA_HISTOGRAM 0
+#endif
+
+#define __REALLY_HAVE_AGP (__HAVE_AGP && (defined(CONFIG_AGP) || \
+ defined(CONFIG_AGP_MODULE)))
+#define __REALLY_HAVE_MTRR (__HAVE_MTRR && defined(CONFIG_MTRR))
+
+
+/* Begin the DRM...
+ */
+
+#define DRM_DEBUG_CODE 2 /* Include debugging code (if > 1, then
+ also include looping detection.) */
+
+#define DRM_HASH_SIZE 16 /* Size of key hash table */
+#define DRM_KERNEL_CONTEXT 0 /* Change drm_resctx if changed */
+#define DRM_RESERVED_CONTEXTS 1 /* Change drm_resctx if changed */
+#define DRM_LOOPING_LIMIT 5000000
+#define DRM_BSZ 1024 /* Buffer size for /dev/drm?
output */ +#define DRM_TIME_SLICE (HZ/20) /* Time slice for GLXContexts */ +#define DRM_LOCK_SLICE 1 /* Time slice for lock, in jiffies */ + +#define DRM_FLAG_DEBUG 0x01 +#define DRM_FLAG_NOCTX 0x02 + +#define DRM_MEM_DMA 0 +#define DRM_MEM_SAREA 1 +#define DRM_MEM_DRIVER 2 +#define DRM_MEM_MAGIC 3 +#define DRM_MEM_IOCTLS 4 +#define DRM_MEM_MAPS 5 +#define DRM_MEM_VMAS 6 +#define DRM_MEM_BUFS 7 +#define DRM_MEM_SEGS 8 +#define DRM_MEM_PAGES 9 +#define DRM_MEM_FILES 10 +#define DRM_MEM_QUEUES 11 +#define DRM_MEM_CMDS 12 +#define DRM_MEM_MAPPINGS 13 +#define DRM_MEM_BUFLISTS 14 +#define DRM_MEM_AGPLISTS 15 +#define DRM_MEM_TOTALAGP 16 +#define DRM_MEM_BOUNDAGP 17 +#define DRM_MEM_CTXBITMAP 18 +#define DRM_MEM_STUB 19 +#define DRM_MEM_SGLISTS 20 + +#define DRM_MAX_CTXBITMAP (4096 * 8) + + /* Backward compatibility section */ + /* _PAGE_WT changed to _PAGE_PWT in 2.2.6 */ +#ifndef _PAGE_PWT +#define _PAGE_PWT _PAGE_WT +#endif + /* Wait queue declarations changed in 2.3.1 */ +#ifndef DECLARE_WAITQUEUE +#define DECLARE_WAITQUEUE(w,c) struct wait_queue w = { c, NULL } +typedef struct wait_queue *wait_queue_head_t; +#define init_waitqueue_head(q) *q = NULL; +#endif + + /* _PAGE_4M changed to _PAGE_PSE in 2.3.23 */ +#ifndef _PAGE_PSE +#define _PAGE_PSE _PAGE_4M +#endif + + /* vm_offset changed to vm_pgoff in 2.3.25 */ +#if LINUX_VERSION_CODE < 0x020319 +#define VM_OFFSET(vma) ((vma)->vm_offset) +#else +#define VM_OFFSET(vma) ((vma)->vm_pgoff << MMUPAGE_SHIFT) +#endif + + /* *_nopage return values defined in 2.3.26 */ +#ifndef NOPAGE_SIGBUS +#define NOPAGE_SIGBUS 0 +#endif +#ifndef NOPAGE_OOM +#define NOPAGE_OOM 0 +#endif + + /* module_init/module_exit added in 2.3.13 */ +#ifndef module_init +#define module_init(x) int init_module(void) { return x(); } +#endif +#ifndef module_exit +#define module_exit(x) void cleanup_module(void) { x(); } +#endif + + /* Generic cmpxchg added in 2.3.x */ +#ifndef __HAVE_ARCH_CMPXCHG + /* Include this here so that driver can be + used with older kernels. 
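+ cmpxchg(ptr,o,n) atomically replaces *ptr with n only when *ptr
+ still equals o, and always returns the value *ptr held beforehand;
+ the fallbacks below implement exactly that.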
*/ +#if defined(__alpha__) +static __inline__ unsigned long +__cmpxchg_u32(volatile int *m, int old, int new) +{ + unsigned long prev, cmp; + + __asm__ __volatile__( + "1: ldl_l %0,%5\n" + " cmpeq %0,%3,%1\n" + " beq %1,2f\n" + " mov %4,%1\n" + " stl_c %1,%2\n" + " beq %1,3f\n" + "2: mb\n" + ".subsection 2\n" + "3: br 1b\n" + ".previous" + : "=&r"(prev), "=&r"(cmp), "=m"(*m) + : "r"((long) old), "r"(new), "m"(*m) + : "memory" ); + + return prev; +} + +static __inline__ unsigned long +__cmpxchg_u64(volatile long *m, unsigned long old, unsigned long new) +{ + unsigned long prev, cmp; + + __asm__ __volatile__( + "1: ldq_l %0,%5\n" + " cmpeq %0,%3,%1\n" + " beq %1,2f\n" + " mov %4,%1\n" + " stq_c %1,%2\n" + " beq %1,3f\n" + "2: mb\n" + ".subsection 2\n" + "3: br 1b\n" + ".previous" + : "=&r"(prev), "=&r"(cmp), "=m"(*m) + : "r"((long) old), "r"(new), "m"(*m) + : "memory" ); + + return prev; +} + +static __inline__ unsigned long +__cmpxchg(volatile void *ptr, unsigned long old, unsigned long new, int size) +{ + switch (size) { + case 4: + return __cmpxchg_u32(ptr, old, new); + case 8: + return __cmpxchg_u64(ptr, old, new); + } + return old; +} +#define cmpxchg(ptr,o,n) \ + ({ \ + __typeof__(*(ptr)) _o_ = (o); \ + __typeof__(*(ptr)) _n_ = (n); \ + (__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long)_o_, \ + (unsigned long)_n_, sizeof(*(ptr))); \ + }) + +#elif __i386__ +static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old, + unsigned long new, int size) +{ + unsigned long prev; + switch (size) { + case 1: + __asm__ __volatile__(LOCK_PREFIX "cmpxchgb %b1,%2" + : "=a"(prev) + : "q"(new), "m"(*__xg(ptr)), "0"(old) + : "memory"); + return prev; + case 2: + __asm__ __volatile__(LOCK_PREFIX "cmpxchgw %w1,%2" + : "=a"(prev) + : "q"(new), "m"(*__xg(ptr)), "0"(old) + : "memory"); + return prev; + case 4: + __asm__ __volatile__(LOCK_PREFIX "cmpxchgl %1,%2" + : "=a"(prev) + : "q"(new), "m"(*__xg(ptr)), "0"(old) + : "memory"); + return prev; + } + return old; +} + +#elif defined(__powerpc__) +extern void __cmpxchg_called_with_bad_pointer(void); +static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old, + unsigned long new, int size) +{ + unsigned long prev; + + switch (size) { + case 4: + __asm__ __volatile__( + "sync;" + "0: lwarx %0,0,%1 ;" + " cmpl 0,%0,%3;" + " bne 1f;" + " stwcx. %2,0,%1;" + " bne- 0b;" + "1: " + "sync;" + : "=&r"(prev) + : "r"(ptr), "r"(new), "r"(old) + : "cr0", "memory"); + return prev; + } + __cmpxchg_called_with_bad_pointer(); + return old; +} + +#endif /* i386, powerpc & alpha */ + +#ifndef __alpha__ +#define cmpxchg(ptr,o,n) \ + ((__typeof__(*(ptr)))__cmpxchg((ptr),(unsigned long)(o), \ + (unsigned long)(n),sizeof(*(ptr)))) +#endif + +#endif /* !__HAVE_ARCH_CMPXCHG */ + + /* Macros to make printk easier */ +#define DRM_ERROR(fmt, arg...) \ + printk(KERN_ERR "[" DRM_NAME ":" __FUNCTION__ "] *ERROR* " fmt , ##arg) +#define DRM_MEM_ERROR(area, fmt, arg...) \ + printk(KERN_ERR "[" DRM_NAME ":" __FUNCTION__ ":%s] *ERROR* " fmt , \ + DRM(mem_stats)[area].name , ##arg) +#define DRM_INFO(fmt, arg...) printk(KERN_INFO "[" DRM_NAME "] " fmt , ##arg) + +#if DRM_DEBUG_CODE +#define DRM_DEBUG(fmt, arg...) \ + do { \ + if ( DRM(flags) & DRM_FLAG_DEBUG ) \ + printk(KERN_DEBUG \ + "[" DRM_NAME ":" __FUNCTION__ "] " fmt , \ + ##arg); \ + } while (0) +#else +#define DRM_DEBUG(fmt, arg...) do { } while (0) +#endif + +#define DRM_PROC_LIMIT (PAGE_SIZE-80) + +#define DRM_PROC_PRINT(fmt, arg...) 
\
+ len += sprintf(&buf[len], fmt , ##arg); \
+ if (len > DRM_PROC_LIMIT) { *eof = 1; return len - offset; }
+
+#define DRM_PROC_PRINT_RET(ret, fmt, arg...) \
+ len += sprintf(&buf[len], fmt , ##arg); \
+ if (len > DRM_PROC_LIMIT) { ret; *eof = 1; return len - offset; }
+
+ /* Mapping helper macros */
+#define DRM_IOREMAP(map) \
+ (map)->handle = DRM(ioremap)( (map)->offset, (map)->size )
+
+#define DRM_IOREMAPFREE(map) \
+ do { \
+ if ( (map)->handle && (map)->size ) \
+ DRM(ioremapfree)( (map)->handle, (map)->size ); \
+ } while (0)
+
+#define DRM_FIND_MAP(_map, _o) \
+do { \
+ struct list_head *_list; \
+ list_for_each( _list, &dev->maplist->head ) { \
+ drm_map_list_t *_entry = (drm_map_list_t *)_list; \
+ if ( _entry->map && \
+ _entry->map->offset == (_o) ) { \
+ (_map) = _entry->map; \
+ break; \
+ } \
+ } \
+} while(0)
+
+ /* Internal types and structures */
+#define DRM_ARRAY_SIZE(x) (sizeof(x)/sizeof(x[0]))
+#define DRM_MIN(a,b) ((a)<(b)?(a):(b))
+#define DRM_MAX(a,b) ((a)>(b)?(a):(b))
+
+#define DRM_LEFTCOUNT(x) (((x)->rp + (x)->count - (x)->wp) % ((x)->count + 1))
+#define DRM_BUFCOUNT(x) ((x)->count - DRM_LEFTCOUNT(x))
+#define DRM_WAITCOUNT(dev,idx) DRM_BUFCOUNT(&dev->queuelist[idx]->waitlist)
+
+#define DRM_GET_PRIV_SAREA(_dev, _ctx, _map) do { \
+ (_map) = (_dev)->context_sareas[_ctx]; \
+} while(0)
+
+typedef int drm_ioctl_t( struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg );
+
+typedef struct drm_pci_list {
+ u16 vendor;
+ u16 device;
+} drm_pci_list_t;
+
+typedef struct drm_ioctl_desc {
+ drm_ioctl_t *func;
+ int auth_needed;
+ int root_only;
+} drm_ioctl_desc_t;
+
+typedef struct drm_devstate {
+ pid_t owner; /* X server pid holding x_lock */
+
+} drm_devstate_t;
+
+typedef struct drm_magic_entry {
+ drm_magic_t magic;
+ struct drm_file *priv;
+ struct drm_magic_entry *next;
+} drm_magic_entry_t;
+
+typedef struct drm_magic_head {
+ struct drm_magic_entry *head;
+ struct drm_magic_entry *tail;
+} drm_magic_head_t;
+
+typedef struct drm_vma_entry {
+ struct vm_area_struct *vma;
+ struct drm_vma_entry *next;
+ pid_t pid;
+} drm_vma_entry_t;
+
+typedef struct drm_buf {
+ int idx; /* Index into master buflist */
+ int total; /* Buffer size */
+ int order; /* log-base-2(total) */
+ int used; /* Amount of buffer in use (for DMA) */
+ unsigned long offset; /* Byte offset (used internally) */
+ void *address; /* Address of buffer */
+ unsigned long bus_address; /* Bus address of buffer */
+ struct drm_buf *next; /* Kernel-only: used for free list */
+ __volatile__ int waiting; /* On kernel DMA queue */
+ __volatile__ int pending; /* On hardware DMA queue */
+ wait_queue_head_t dma_wait; /* Processes waiting */
+ pid_t pid; /* PID of holding process */
+ int context; /* Kernel queue for this buffer */
+ int while_locked;/* Dispatch this buffer while locked */
+ enum {
+ DRM_LIST_NONE = 0,
+ DRM_LIST_FREE = 1,
+ DRM_LIST_WAIT = 2,
+ DRM_LIST_PEND = 3,
+ DRM_LIST_PRIO = 4,
+ DRM_LIST_RECLAIM = 5
+ } list; /* Which list we're on */
+
+#if DRM_DMA_HISTOGRAM
+ cycles_t time_queued; /* Queued to kernel DMA queue */
+ cycles_t time_dispatched; /* Dispatched to hardware */
+ cycles_t time_completed; /* Completed by hardware */
+ cycles_t time_freed; /* Back on freelist */
+#endif
+
+ int dev_priv_size; /* Size of buffer private storage */
+ void *dev_private; /* Per-buffer private storage */
+} drm_buf_t;
+
+#if DRM_DMA_HISTOGRAM
+#define DRM_DMA_HISTOGRAM_SLOTS 9
+#define DRM_DMA_HISTOGRAM_INITIAL 10
+#define DRM_DMA_HISTOGRAM_NEXT(current) ((current)*10)
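+/* These three macros presumably give decade buckets: slot 0 counts
+ * timings below DRM_DMA_HISTOGRAM_INITIAL (10 cycles), and each
+ * DRM_DMA_HISTOGRAM_NEXT() step raises the bound tenfold across the
+ * 9 slots, the last slot catching everything slower. */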
+typedef struct drm_histogram { + atomic_t total; + + atomic_t queued_to_dispatched[DRM_DMA_HISTOGRAM_SLOTS]; + atomic_t dispatched_to_completed[DRM_DMA_HISTOGRAM_SLOTS]; + atomic_t completed_to_freed[DRM_DMA_HISTOGRAM_SLOTS]; + + atomic_t queued_to_completed[DRM_DMA_HISTOGRAM_SLOTS]; + atomic_t queued_to_freed[DRM_DMA_HISTOGRAM_SLOTS]; + + atomic_t dma[DRM_DMA_HISTOGRAM_SLOTS]; + atomic_t schedule[DRM_DMA_HISTOGRAM_SLOTS]; + atomic_t ctx[DRM_DMA_HISTOGRAM_SLOTS]; + atomic_t lacq[DRM_DMA_HISTOGRAM_SLOTS]; + atomic_t lhld[DRM_DMA_HISTOGRAM_SLOTS]; +} drm_histogram_t; +#endif + + /* bufs is one longer than it has to be */ +typedef struct drm_waitlist { + int count; /* Number of possible buffers */ + drm_buf_t **bufs; /* List of pointers to buffers */ + drm_buf_t **rp; /* Read pointer */ + drm_buf_t **wp; /* Write pointer */ + drm_buf_t **end; /* End pointer */ + spinlock_t read_lock; + spinlock_t write_lock; +} drm_waitlist_t; + +typedef struct drm_freelist { + int initialized; /* Freelist in use */ + atomic_t count; /* Number of free buffers */ + drm_buf_t *next; /* End pointer */ + + wait_queue_head_t waiting; /* Processes waiting on free bufs */ + int low_mark; /* Low water mark */ + int high_mark; /* High water mark */ + atomic_t wfh; /* If waiting for high mark */ + spinlock_t lock; +} drm_freelist_t; + +typedef struct drm_buf_entry { + int buf_size; + int buf_count; + drm_buf_t *buflist; + int seg_count; + int page_order; /* in PAGE_SIZE units */ + unsigned long *seglist; + + drm_freelist_t freelist; +} drm_buf_entry_t; + +typedef struct drm_hw_lock { + __volatile__ unsigned int lock; + char padding[60]; /* Pad to cache line */ +} drm_hw_lock_t; + +typedef struct drm_file { + int authenticated; + int minor; + pid_t pid; + uid_t uid; + drm_magic_t magic; + unsigned long ioctl_count; + struct drm_file *next; + struct drm_file *prev; + struct drm_device *dev; + int remove_auth_on_close; +} drm_file_t; + + +typedef struct drm_queue { + atomic_t use_count; /* Outstanding uses (+1) */ + atomic_t finalization; /* Finalization in progress */ + atomic_t block_count; /* Count of processes waiting */ + atomic_t block_read; /* Queue blocked for reads */ + wait_queue_head_t read_queue; /* Processes waiting on block_read */ + atomic_t block_write; /* Queue blocked for writes */ + wait_queue_head_t write_queue; /* Processes waiting on block_write */ +#if 1 + atomic_t total_queued; /* Total queued statistic */ + atomic_t total_flushed;/* Total flushes statistic */ + atomic_t total_locks; /* Total locks statistics */ +#endif + drm_ctx_flags_t flags; /* Context preserving and 2D-only */ + drm_waitlist_t waitlist; /* Pending buffers */ + wait_queue_head_t flush_queue; /* Processes waiting until flush */ +} drm_queue_t; + +typedef struct drm_lock_data { + drm_hw_lock_t *hw_lock; /* Hardware lock */ + pid_t pid; /* PID of lock holder (0=kernel) */ + wait_queue_head_t lock_queue; /* Queue of blocked processes */ + unsigned long lock_time; /* Time of last lock in jiffies */ +} drm_lock_data_t; + +typedef struct drm_device_dma { +#if 0 + /* Performance Counters */ + atomic_t total_prio; /* Total DRM_DMA_PRIORITY */ + atomic_t total_bytes; /* Total bytes DMA'd */ + atomic_t total_dmas; /* Total DMA buffers dispatched */ + + atomic_t total_missed_dma; /* Missed drm_do_dma */ + atomic_t total_missed_lock; /* Missed lock in drm_do_dma */ + atomic_t total_missed_free; /* Missed drm_free_this_buffer */ + atomic_t total_missed_sched;/* Missed drm_dma_schedule */ + + atomic_t total_tried; /* Tried next_buffer */ + 
atomic_t total_hit; /* Sent next_buffer */ + atomic_t total_lost; /* Lost interrupt */ +#endif + + drm_buf_entry_t bufs[DRM_MAX_ORDER+1]; + int buf_count; + drm_buf_t **buflist; /* Vector of pointers into bufs */ + int seg_count; + int page_count; + unsigned long *pagelist; + unsigned long byte_count; + enum { + _DRM_DMA_USE_AGP = 0x01, + _DRM_DMA_USE_SG = 0x02 + } flags; + + /* DMA support */ + drm_buf_t *this_buffer; /* Buffer being sent */ + drm_buf_t *next_buffer; /* Selected buffer to send */ + drm_queue_t *next_queue; /* Queue from which buffer selected*/ + wait_queue_head_t waiting; /* Processes waiting on free bufs */ +} drm_device_dma_t; + +#if __REALLY_HAVE_AGP +typedef struct drm_agp_mem { + unsigned long handle; + agp_memory *memory; + unsigned long bound; /* address */ + int pages; + struct drm_agp_mem *prev; + struct drm_agp_mem *next; +} drm_agp_mem_t; + +typedef struct drm_agp_head { + agp_kern_info agp_info; + const char *chipset; + drm_agp_mem_t *memory; + unsigned long mode; + int enabled; + int acquired; + unsigned long base; + int agp_mtrr; +} drm_agp_head_t; +#endif + +typedef struct drm_sg_mem { + unsigned long handle; + void *virtual; + int pages; + struct page **pagelist; +} drm_sg_mem_t; + +typedef struct drm_sigdata { + int context; + drm_hw_lock_t *lock; +} drm_sigdata_t; + +typedef struct drm_map_list { + struct list_head head; + drm_map_t *map; +} drm_map_list_t; + +typedef struct drm_device { + const char *name; /* Simple driver name */ + char *unique; /* Unique identifier: e.g., busid */ + int unique_len; /* Length of unique field */ + dev_t device; /* Device number for mknod */ + char *devname; /* For /proc/interrupts */ + + int blocked; /* Blocked due to VC switch? */ + struct proc_dir_entry *root; /* Root for this device's entries */ + + /* Locks */ + spinlock_t count_lock; /* For inuse, open_count, buf_use */ + struct semaphore struct_sem; /* For others */ + + /* Usage Counters */ + int open_count; /* Outstanding files open */ + atomic_t ioctl_count; /* Outstanding IOCTLs pending */ + atomic_t vma_count; /* Outstanding vma areas open */ + int buf_use; /* Buffers in use -- cannot alloc */ + atomic_t buf_alloc; /* Buffer allocation in progress */ + + /* Performance counters */ + unsigned long counters; + drm_stat_type_t types[15]; + atomic_t counts[15]; + + /* Authentication */ + drm_file_t *file_first; + drm_file_t *file_last; + drm_magic_head_t magiclist[DRM_HASH_SIZE]; + + /* Memory management */ + drm_map_list_t *maplist; /* Linked list of regions */ + int map_count; /* Number of mappable regions */ + + drm_map_t **context_sareas; + int max_context; + + drm_vma_entry_t *vmalist; /* List of vmas (for debugging) */ + drm_lock_data_t lock; /* Information on hardware lock */ + + /* DMA queues (contexts) */ + int queue_count; /* Number of active DMA queues */ + int queue_reserved; /* Number of reserved DMA queues */ + int queue_slots; /* Actual length of queuelist */ + drm_queue_t **queuelist; /* Vector of pointers to DMA queues */ + drm_device_dma_t *dma; /* Optional pointer for DMA support */ + + /* Context support */ + int irq; /* Interrupt used by board */ + __volatile__ long context_flag; /* Context swapping flag */ + __volatile__ long interrupt_flag; /* Interruption handler flag */ + __volatile__ long dma_flag; /* DMA dispatch flag */ + struct timer_list timer; /* Timer for delaying ctx switch */ + wait_queue_head_t context_wait; /* Processes waiting on ctx switch */ + int last_checked; /* Last context checked for DMA */ + int last_context; /* Last
current context */ + unsigned long last_switch; /* jiffies at last context switch */ + struct tq_struct tq; + cycles_t ctx_start; + cycles_t lck_start; +#if __HAVE_DMA_HISTOGRAM + drm_histogram_t histo; +#endif + + /* Callback to X server for context switch + and for heavy-handed reset. */ + char buf[DRM_BSZ]; /* Output buffer */ + char *buf_rp; /* Read pointer */ + char *buf_wp; /* Write pointer */ + char *buf_end; /* End pointer */ + struct fasync_struct *buf_async;/* Processes waiting for SIGIO */ + wait_queue_head_t buf_readers; /* Processes waiting to read */ + wait_queue_head_t buf_writers; /* Processes waiting to ctx switch */ + +#if __REALLY_HAVE_AGP + drm_agp_head_t *agp; +#endif +#ifdef __alpha__ +#if LINUX_VERSION_CODE < 0x020403 + struct pci_controler *hose; +#else + struct pci_controller *hose; +#endif +#endif + drm_sg_mem_t *sg; /* Scatter gather memory */ + unsigned long *ctx_bitmap; + void *dev_private; + drm_sigdata_t sigdata; /* For block_all_signals */ + sigset_t sigmask; +} drm_device_t; + + +/* ================================================================ + * Internal function definitions + */ + + /* Misc. support (drm_init.h) */ +extern int DRM(flags); +extern void DRM(parse_options)( char *s ); +extern int DRM(cpu_valid)( void ); + + /* Driver support (drm_drv.h) */ +extern int DRM(version)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int DRM(open)(struct inode *inode, struct file *filp); +extern int DRM(release)(struct inode *inode, struct file *filp); +extern int DRM(ioctl)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int DRM(lock)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int DRM(unlock)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); + + /* Device support (drm_fops.h) */ +extern int DRM(open_helper)(struct inode *inode, struct file *filp, + drm_device_t *dev); +extern int DRM(flush)(struct file *filp); +extern int DRM(release)(struct inode *inode, struct file *filp); +extern int DRM(fasync)(int fd, struct file *filp, int on); +extern ssize_t DRM(read)(struct file *filp, char *buf, size_t count, + loff_t *off); +extern int DRM(write_string)(drm_device_t *dev, const char *s); +extern unsigned int DRM(poll)(struct file *filp, + struct poll_table_struct *wait); + + /* Mapping support (drm_vm.h) */ +#if LINUX_VERSION_CODE < 0x020317 +extern unsigned long DRM(vm_nopage)(struct vm_area_struct *vma, + unsigned long address, + int write_access); +extern unsigned long DRM(vm_shm_nopage)(struct vm_area_struct *vma, + unsigned long address, + int write_access); +extern unsigned long DRM(vm_dma_nopage)(struct vm_area_struct *vma, + unsigned long address, + int write_access); +extern unsigned long DRM(vm_sg_nopage)(struct vm_area_struct *vma, + unsigned long address, + int write_access); +#else + /* Return type changed in 2.3.23 */ +extern struct page *DRM(vm_nopage)(struct vm_area_struct *vma, + unsigned long address, + int write_access); +extern struct page *DRM(vm_shm_nopage)(struct vm_area_struct *vma, + unsigned long address, + int write_access); +extern struct page *DRM(vm_dma_nopage)(struct vm_area_struct *vma, + unsigned long address, + int write_access); +extern struct page *DRM(vm_sg_nopage)(struct vm_area_struct *vma, + unsigned long address, + int write_access); +#endif +extern void DRM(vm_open)(struct vm_area_struct *vma); +extern void DRM(vm_close)(struct vm_area_struct *vma); +extern void
DRM(vm_shm_close)(struct vm_area_struct *vma); +extern int DRM(mmap_dma)(struct file *filp, + struct vm_area_struct *vma); +extern int DRM(mmap)(struct file *filp, struct vm_area_struct *vma); + + /* Memory management support (drm_memory.h) */ +extern void DRM(mem_init)(void); +extern int DRM(mem_info)(char *buf, char **start, off_t offset, + int request, int *eof, void *data); +extern void *DRM(alloc)(size_t size, int area); +extern void *DRM(realloc)(void *oldpt, size_t oldsize, size_t size, + int area); +extern char *DRM(strdup)(const char *s, int area); +extern void DRM(strfree)(const char *s, int area); +extern void DRM(free)(void *pt, size_t size, int area); +extern unsigned long DRM(alloc_pages)(int order, int area); +extern void DRM(free_pages)(unsigned long address, int order, + int area); +extern void *DRM(ioremap)(unsigned long offset, unsigned long size); +extern void DRM(ioremapfree)(void *pt, unsigned long size); + +#if __REALLY_HAVE_AGP +extern agp_memory *DRM(alloc_agp)(int pages, u32 type); +extern int DRM(free_agp)(agp_memory *handle, int pages); +extern int DRM(bind_agp)(agp_memory *handle, unsigned int start); +extern int DRM(unbind_agp)(agp_memory *handle); +#endif + + /* Misc. IOCTL support (drm_ioctl.h) */ +extern int DRM(irq_busid)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int DRM(getunique)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int DRM(setunique)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int DRM(getmap)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int DRM(getclient)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int DRM(getstats)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); + + /* Context IOCTL support (drm_context.h) */ +extern int DRM(resctx)( struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg ); +extern int DRM(addctx)( struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg ); +extern int DRM(modctx)( struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg ); +extern int DRM(getctx)( struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg ); +extern int DRM(switchctx)( struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg ); +extern int DRM(newctx)( struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg ); +extern int DRM(rmctx)( struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg ); + +extern int DRM(context_switch)(drm_device_t *dev, int old, int new); +extern int DRM(context_switch_complete)(drm_device_t *dev, int new); + +#if __HAVE_CTX_BITMAP +extern int DRM(ctxbitmap_init)( drm_device_t *dev ); +extern void DRM(ctxbitmap_cleanup)( drm_device_t *dev ); +#endif + +extern int DRM(setsareactx)( struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg ); +extern int DRM(getsareactx)( struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg ); + + /* Drawable IOCTL support (drm_drawable.h) */ +extern int DRM(adddraw)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int DRM(rmdraw)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); + + + /* Authentication IOCTL support (drm_auth.h) */ +extern int DRM(add_magic)(drm_device_t *dev, 
drm_file_t *priv, + drm_magic_t magic); +extern int DRM(remove_magic)(drm_device_t *dev, drm_magic_t magic); +extern int DRM(getmagic)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int DRM(authmagic)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); + + + /* Locking IOCTL support (drm_lock.h) */ +extern int DRM(block)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int DRM(unblock)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int DRM(lock_take)(__volatile__ unsigned int *lock, + unsigned int context); +extern int DRM(lock_transfer)(drm_device_t *dev, + __volatile__ unsigned int *lock, + unsigned int context); +extern int DRM(lock_free)(drm_device_t *dev, + __volatile__ unsigned int *lock, + unsigned int context); +extern int DRM(finish)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int DRM(flush_unblock)(drm_device_t *dev, int context, + drm_lock_flags_t flags); +extern int DRM(flush_block_and_flush)(drm_device_t *dev, int context, + drm_lock_flags_t flags); +extern int DRM(notifier)(void *priv); + + /* Buffer management support (drm_bufs.h) */ +extern int DRM(order)( unsigned long size ); +extern int DRM(addmap)( struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg ); +extern int DRM(rmmap)( struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg ); +#if __HAVE_DMA +extern int DRM(addbufs)( struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg ); +extern int DRM(infobufs)( struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg ); +extern int DRM(markbufs)( struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg ); +extern int DRM(freebufs)( struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg ); +extern int DRM(mapbufs)( struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg ); + + /* DMA support (drm_dma.h) */ +extern int DRM(dma_setup)(drm_device_t *dev); +extern void DRM(dma_takedown)(drm_device_t *dev); +extern void DRM(free_buffer)(drm_device_t *dev, drm_buf_t *buf); +extern void DRM(reclaim_buffers)(drm_device_t *dev, pid_t pid); +#if __HAVE_OLD_DMA +/* GH: This is a dirty hack for now... 
+ */ +extern void DRM(clear_next_buffer)(drm_device_t *dev); +extern int DRM(select_queue)(drm_device_t *dev, + void (*wrapper)(unsigned long)); +extern int DRM(dma_enqueue)(drm_device_t *dev, drm_dma_t *dma); +extern int DRM(dma_get_buffers)(drm_device_t *dev, drm_dma_t *dma); +#endif +#if __HAVE_DMA_IRQ +extern int DRM(control)( struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg ); +extern int DRM(irq_install)( drm_device_t *dev, int irq ); +extern int DRM(irq_uninstall)( drm_device_t *dev ); +extern void DRM(dma_service)( int irq, void *device, + struct pt_regs *regs ); +#if __HAVE_DMA_IRQ_BH +extern void DRM(dma_immediate_bh)( void *dev ); +#endif +#endif +#if DRM_DMA_HISTOGRAM +extern int DRM(histogram_slot)(unsigned long count); +extern void DRM(histogram_compute)(drm_device_t *dev, drm_buf_t *buf); +#endif + + /* Buffer list support (drm_lists.h) */ +#if __HAVE_DMA_WAITLIST +extern int DRM(waitlist_create)(drm_waitlist_t *bl, int count); +extern int DRM(waitlist_destroy)(drm_waitlist_t *bl); +extern int DRM(waitlist_put)(drm_waitlist_t *bl, drm_buf_t *buf); +extern drm_buf_t *DRM(waitlist_get)(drm_waitlist_t *bl); +#endif +#if __HAVE_DMA_FREELIST +extern int DRM(freelist_create)(drm_freelist_t *bl, int count); +extern int DRM(freelist_destroy)(drm_freelist_t *bl); +extern int DRM(freelist_put)(drm_device_t *dev, drm_freelist_t *bl, + drm_buf_t *buf); +extern drm_buf_t *DRM(freelist_get)(drm_freelist_t *bl, int block); +#endif +#endif /* __HAVE_DMA */ + +#if __REALLY_HAVE_AGP + /* AGP/GART support (drm_agpsupport.h) */ +extern drm_agp_head_t *DRM(agp_init)(void); +extern void DRM(agp_uninit)(void); +extern int DRM(agp_acquire)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern void DRM(agp_do_release)(void); +extern int DRM(agp_release)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int DRM(agp_enable)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int DRM(agp_info)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int DRM(agp_alloc)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int DRM(agp_free)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int DRM(agp_unbind)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int DRM(agp_bind)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern agp_memory *DRM(agp_allocate_memory)(size_t pages, u32 type); +extern int DRM(agp_free_memory)(agp_memory *handle); +extern int DRM(agp_bind_memory)(agp_memory *handle, off_t start); +extern int DRM(agp_unbind_memory)(agp_memory *handle); +#endif + + /* Stub support (drm_stub.h) */ +int DRM(stub_register)(const char *name, + struct file_operations *fops, + drm_device_t *dev); +int DRM(stub_unregister)(int minor); + + /* Proc support (drm_proc.h) */ +extern struct proc_dir_entry *DRM(proc_init)(drm_device_t *dev, + int minor, + struct proc_dir_entry *root, + struct proc_dir_entry **dev_root); +extern int DRM(proc_cleanup)(int minor, + struct proc_dir_entry *root, + struct proc_dir_entry *dev_root); + +#if __HAVE_SG + /* Scatter Gather Support (drm_scatter.h) */ +extern void DRM(sg_cleanup)(drm_sg_mem_t *entry); +extern int DRM(sg_alloc)(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int DRM(sg_free)(struct inode *inode, struct file 
*filp, + unsigned int cmd, unsigned long arg); +#endif + + /* ATI PCIGART support (ati_pcigart.h) */ +extern unsigned long DRM(ati_pcigart_init)(drm_device_t *dev); +extern int DRM(ati_pcigart_cleanup)(unsigned long address); + +#endif /* __KERNEL__ */ +#endif diff -urpN linux-2.4.9-linus/drivers/char/drm/drm_agpsupport.h linux-2.4.9-larpage/drivers/char/drm/drm_agpsupport.h --- linux-2.4.9-linus/drivers/char/drm/drm_agpsupport.h 2001-08-15 14:21:50.000000000 -0700 +++ linux-2.4.9-larpage/drivers/char/drm/drm_agpsupport.h 2002-11-20 02:02:36.000000000 -0800 @@ -61,8 +61,8 @@ int DRM(agp_info)(struct inode *inode, s info.mode = kern->mode; info.aperture_base = kern->aper_base; info.aperture_size = kern->aper_size * 1024 * 1024; - info.memory_allowed = kern->max_memory << PAGE_SHIFT; - info.memory_used = kern->current_memory << PAGE_SHIFT; + info.memory_allowed = kern->max_memory * AGP_PAGE_SIZE; + info.memory_used = kern->current_memory * AGP_PAGE_SIZE; info.id_vendor = kern->device->vendor; info.id_device = kern->device->device; @@ -143,7 +143,7 @@ int DRM(agp_alloc)(struct inode *inode, memset(entry, 0, sizeof(*entry)); - pages = (request.size + PAGE_SIZE - 1) / PAGE_SIZE; + pages = (request.size + AGP_PAGE_SIZE - 1) / AGP_PAGE_SIZE; type = (u32) request.type; if (!(memory = DRM(alloc_agp)(pages, type))) { @@ -218,9 +218,9 @@ int DRM(agp_bind)(struct inode *inode, s if (!(entry = DRM(agp_lookup_entry)(dev, request.handle))) return -EINVAL; if (entry->bound) return -EINVAL; - page = (request.offset + PAGE_SIZE - 1) / PAGE_SIZE; + page = (request.offset + AGP_PAGE_SIZE - 1) / AGP_PAGE_SIZE; if ((retcode = DRM(bind_agp)(entry->memory, page))) return retcode; - entry->bound = dev->agp->base + (page << PAGE_SHIFT); + entry->bound = dev->agp->base + (page * AGP_PAGE_SIZE); DRM_DEBUG("base = 0x%lx entry->bound = 0x%lx\n", dev->agp->base, entry->bound); return 0; diff -urpN linux-2.4.9-linus/drivers/char/drm/drm_bufs.h linux-2.4.9-larpage/drivers/char/drm/drm_bufs.h --- linux-2.4.9-linus/drivers/char/drm/drm_bufs.h 2001-08-15 14:21:47.000000000 -0700 +++ linux-2.4.9-larpage/drivers/char/drm/drm_bufs.h 2002-11-20 02:02:36.000000000 -0800 @@ -97,7 +97,7 @@ int DRM(addmap)( struct inode *inode, st } DRM_DEBUG( "offset = 0x%08lx, size = 0x%08lx, type = %d\n", map->offset, map->size, map->type ); - if ( (map->offset & (~PAGE_MASK)) || (map->size & (~PAGE_MASK)) ) { + if ( (map->offset & (~MMUPAGE_MASK)) || (map->size & (~MMUPAGE_MASK)) ) { DRM(free)( map, sizeof(*map), DRM_MEM_MAPS ); return -EINVAL; } @@ -522,7 +522,7 @@ int DRM(addbufs_pci)( struct inode *inod if ( dev->queue_count ) return -EBUSY; /* Not while in use */ alignment = (request.flags & _DRM_PAGE_ALIGN) - ? PAGE_ALIGN(size) : size; + ? MMUPAGE_ALIGN(size) : size; page_order = order - PAGE_SHIFT > 0 ? 
order - PAGE_SHIFT : 0; total = PAGE_SIZE << page_order; diff -urpN linux-2.4.9-linus/drivers/char/drm/drm_context.h linux-2.4.9-larpage/drivers/char/drm/drm_context.h --- linux-2.4.9-linus/drivers/char/drm/drm_context.h 2001-08-15 14:21:47.000000000 -0700 +++ linux-2.4.9-larpage/drivers/char/drm/drm_context.h 2002-11-20 02:02:36.000000000 -0800 @@ -112,13 +112,13 @@ int DRM(ctxbitmap_init)( drm_device_t *d int temp; down(&dev->struct_sem); - dev->ctx_bitmap = (unsigned long *) DRM(alloc)( PAGE_SIZE, + dev->ctx_bitmap = (unsigned long *) DRM(alloc)( DRM_MAX_CTXBITMAP/8, DRM_MEM_CTXBITMAP ); if ( dev->ctx_bitmap == NULL ) { up(&dev->struct_sem); return -ENOMEM; } - memset( (void *)dev->ctx_bitmap, 0, PAGE_SIZE ); + memset( (void *)dev->ctx_bitmap, 0, DRM_MAX_CTXBITMAP/8 ); dev->context_sareas = NULL; dev->max_context = -1; up(&dev->struct_sem); @@ -138,7 +138,7 @@ void DRM(ctxbitmap_cleanup)( drm_device_ sizeof(*dev->context_sareas) * dev->max_context, DRM_MEM_MAPS ); - DRM(free)( (void *)dev->ctx_bitmap, PAGE_SIZE, DRM_MEM_CTXBITMAP ); + DRM(free)( (void *)dev->ctx_bitmap, DRM_MAX_CTXBITMAP/8, DRM_MEM_CTXBITMAP ); up(&dev->struct_sem); } diff -urpN linux-2.4.9-linus/drivers/char/drm/drm_memory.h linux-2.4.9-larpage/drivers/char/drm/drm_memory.h --- linux-2.4.9-linus/drivers/char/drm/drm_memory.h 2001-08-15 14:21:47.000000000 -0700 +++ linux-2.4.9-larpage/drivers/char/drm/drm_memory.h 2002-11-20 02:02:39.000000000 -0800 @@ -224,10 +224,14 @@ void DRM(free)(void *pt, size_t size, in unsigned long DRM(alloc_pages)(int order, int area) { unsigned long address; - unsigned long bytes = PAGE_SIZE << order; + unsigned long bytes; unsigned long addr; unsigned int sz; + if (order < 0) + order = 0; + bytes = PAGE_SIZE << order; + spin_lock(&DRM(mem_lock)); if ((DRM(ram_used) >> PAGE_SHIFT) > (DRM_RAM_PERCENT * DRM(ram_available)) / 100) { @@ -270,12 +274,16 @@ unsigned long DRM(alloc_pages)(int order void DRM(free_pages)(unsigned long address, int order, int area) { - unsigned long bytes = PAGE_SIZE << order; + unsigned long bytes; int alloc_count; int free_count; unsigned long addr; unsigned int sz; + if (order < 0) + order = 0; + bytes = PAGE_SIZE << order; + if (!address) { DRM_MEM_ERROR(area, "Attempt to free address 0\n"); } else { @@ -353,6 +361,7 @@ void DRM(ioremapfree)(void *pt, unsigned } #if __REALLY_HAVE_AGP +#include agp_memory *DRM(alloc_agp)(int pages, u32 type) { @@ -367,7 +376,7 @@ agp_memory *DRM(alloc_agp)(int pages, u3 spin_lock(&DRM(mem_lock)); ++DRM(mem_stats)[DRM_MEM_TOTALAGP].succeed_count; DRM(mem_stats)[DRM_MEM_TOTALAGP].bytes_allocated - += pages << PAGE_SHIFT; + += pages * AGP_PAGE_SIZE; spin_unlock(&DRM(mem_lock)); return handle; } @@ -394,7 +403,7 @@ int DRM(free_agp)(agp_memory *handle, in free_count = ++DRM(mem_stats)[DRM_MEM_TOTALAGP].free_count; alloc_count = DRM(mem_stats)[DRM_MEM_TOTALAGP].succeed_count; DRM(mem_stats)[DRM_MEM_TOTALAGP].bytes_freed - += pages << PAGE_SHIFT; + += pages * AGP_PAGE_SIZE; spin_unlock(&DRM(mem_lock)); if (free_count > alloc_count) { DRM_MEM_ERROR(DRM_MEM_TOTALAGP, @@ -420,7 +429,7 @@ int DRM(bind_agp)(agp_memory *handle, un spin_lock(&DRM(mem_lock)); ++DRM(mem_stats)[DRM_MEM_BOUNDAGP].succeed_count; DRM(mem_stats)[DRM_MEM_BOUNDAGP].bytes_allocated - += handle->page_count << PAGE_SHIFT; + += handle->page_count * AGP_PAGE_SIZE; spin_unlock(&DRM(mem_lock)); return retcode; } @@ -447,7 +456,7 @@ int DRM(unbind_agp)(agp_memory *handle) free_count = ++DRM(mem_stats)[DRM_MEM_BOUNDAGP].free_count; alloc_count = 
DRM(mem_stats)[DRM_MEM_BOUNDAGP].succeed_count; DRM(mem_stats)[DRM_MEM_BOUNDAGP].bytes_freed - += handle->page_count << PAGE_SHIFT; + += handle->page_count * AGP_PAGE_SIZE; spin_unlock(&DRM(mem_lock)); if (free_count > alloc_count) { DRM_MEM_ERROR(DRM_MEM_BOUNDAGP, diff -urpN linux-2.4.9-linus/drivers/char/drm/drm_proc.h linux-2.4.9-larpage/drivers/char/drm/drm_proc.h --- linux-2.4.9-linus/drivers/char/drm/drm_proc.h 2001-08-15 14:21:47.000000000 -0700 +++ linux-2.4.9-larpage/drivers/char/drm/drm_proc.h 2002-11-20 02:02:39.000000000 -0800 @@ -445,15 +445,17 @@ static int DRM(_vma_info)(char *buf, cha pgprot & _PAGE_GLOBAL ? 'g' : 'l' ); #endif DRM_PROC_PRINT("\n"); -#if 0 - for (i = vma->vm_start; i < vma->vm_end; i += PAGE_SIZE) { +#if DRM_VMA_VERBOSE + for (i = vma->vm_start; i < vma->vm_end; i += MMUPAGE_SIZE) { pgd = pgd_offset(vma->vm_mm, i); pmd = pmd_offset(pgd, i); pte = pte_offset(pmd, i); if (pte_present(*pte)) { - address = __pa(pte_page(*pte)) - + (i & (PAGE_SIZE-1)); - DRM_PROC_PRINT(" 0x%08lx -> 0x%08lx" + /* Show up to 64GB: too i386-centric? */ + address = pte_page(*pte) - mem_map; + address <<= PAGE_SHIFT - 12; + address += pte_suboffset(*pte) >> 12; + DRM_PROC_PRINT(" 0x%08lx -> 0x%06lx000" " %c%c%c%c%c\n", i, address, diff -urpN linux-2.4.9-linus/drivers/char/drm/drm_proc.h.orig linux-2.4.9-larpage/drivers/char/drm/drm_proc.h.orig --- linux-2.4.9-linus/drivers/char/drm/drm_proc.h.orig 1969-12-31 16:00:00.000000000 -0800 +++ linux-2.4.9-larpage/drivers/char/drm/drm_proc.h.orig 2001-08-15 14:21:47.000000000 -0700 @@ -0,0 +1,630 @@ +/* drm_proc.h -- /proc support for DRM -*- linux-c -*- + * Created: Mon Jan 11 09:48:47 1999 by faith@valinux.com + * + * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. + * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. + * All Rights Reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice (including the next + * paragraph) shall be included in all copies or substantial portions of the + * Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + * Authors: + * Rickard E. (Rik) Faith + * Gareth Hughes + * + * Acknowledgements: + * Matthew J Sottek sent in a patch to fix + * the problem with the proc files not outputting all their information. 
+ */ + +#define __NO_VERSION__ +#include "drmP.h" + +static int DRM(name_info)(char *buf, char **start, off_t offset, + int request, int *eof, void *data); +static int DRM(vm_info)(char *buf, char **start, off_t offset, + int request, int *eof, void *data); +static int DRM(clients_info)(char *buf, char **start, off_t offset, + int request, int *eof, void *data); +static int DRM(queues_info)(char *buf, char **start, off_t offset, + int request, int *eof, void *data); +static int DRM(bufs_info)(char *buf, char **start, off_t offset, + int request, int *eof, void *data); +#if DRM_DEBUG_CODE +static int DRM(vma_info)(char *buf, char **start, off_t offset, + int request, int *eof, void *data); +#endif +#if __HAVE_DMA_HISTOGRAM +static int DRM(histo_info)(char *buf, char **start, off_t offset, + int request, int *eof, void *data); +#endif + +struct drm_proc_list { + const char *name; + int (*f)(char *, char **, off_t, int, int *, void *); +} DRM(proc_list)[] = { + { "name", DRM(name_info) }, + { "mem", DRM(mem_info) }, + { "vm", DRM(vm_info) }, + { "clients", DRM(clients_info) }, + { "queues", DRM(queues_info) }, + { "bufs", DRM(bufs_info) }, +#if DRM_DEBUG_CODE + { "vma", DRM(vma_info) }, +#endif +#if __HAVE_DMA_HISTOGRAM + { "histo", DRM(histo_info) }, +#endif +}; +#define DRM_PROC_ENTRIES (sizeof(DRM(proc_list))/sizeof(DRM(proc_list)[0])) + +struct proc_dir_entry *DRM(proc_init)(drm_device_t *dev, int minor, + struct proc_dir_entry *root, + struct proc_dir_entry **dev_root) +{ + struct proc_dir_entry *ent; + int i, j; + char name[64]; + + if (!minor) root = create_proc_entry("dri", S_IFDIR, NULL); + if (!root) { + DRM_ERROR("Cannot create /proc/dri\n"); + return NULL; + } + + sprintf(name, "%d", minor); + *dev_root = create_proc_entry(name, S_IFDIR, root); + if (!*dev_root) { + DRM_ERROR("Cannot create /proc/%s\n", name); + return NULL; + } + + for (i = 0; i < DRM_PROC_ENTRIES; i++) { + ent = create_proc_entry(DRM(proc_list)[i].name, + S_IFREG|S_IRUGO, *dev_root); + if (!ent) { + DRM_ERROR("Cannot create /proc/dri/%s/%s\n", + name, DRM(proc_list)[i].name); + for (j = 0; j < i; j++) + remove_proc_entry(DRM(proc_list)[i].name, + *dev_root); + remove_proc_entry(name, root); + if (!minor) remove_proc_entry("dri", NULL); + return NULL; + } + ent->read_proc = DRM(proc_list)[i].f; + ent->data = dev; + } + + return root; +} + + +int DRM(proc_cleanup)(int minor, struct proc_dir_entry *root, + struct proc_dir_entry *dev_root) +{ + int i; + char name[64]; + + if (!root || !dev_root) return 0; + + for (i = 0; i < DRM_PROC_ENTRIES; i++) + remove_proc_entry(DRM(proc_list)[i].name, dev_root); + sprintf(name, "%d", minor); + remove_proc_entry(name, root); + if (!minor) remove_proc_entry("dri", NULL); + + return 0; +} + +static int DRM(name_info)(char *buf, char **start, off_t offset, int request, + int *eof, void *data) +{ + drm_device_t *dev = (drm_device_t *)data; + int len = 0; + + if (offset > DRM_PROC_LIMIT) { + *eof = 1; + return 0; + } + + *start = &buf[offset]; + *eof = 0; + + if (dev->unique) { + DRM_PROC_PRINT("%s 0x%x %s\n", + dev->name, dev->device, dev->unique); + } else { + DRM_PROC_PRINT("%s 0x%x\n", dev->name, dev->device); + } + + if (len > request + offset) return request; + *eof = 1; + return len - offset; +} + +static int DRM(_vm_info)(char *buf, char **start, off_t offset, int request, + int *eof, void *data) +{ + drm_device_t *dev = (drm_device_t *)data; + int len = 0; + drm_map_t *map; + drm_map_list_t *r_list; + struct list_head *list; + + /* Hardcoded from _DRM_FRAME_BUFFER, + 
_DRM_REGISTERS, _DRM_SHM, and + _DRM_AGP. */ + const char *types[] = { "FB", "REG", "SHM", "AGP" }; + const char *type; + int i; + + if (offset > DRM_PROC_LIMIT) { + *eof = 1; + return 0; + } + + *start = &buf[offset]; + *eof = 0; + + DRM_PROC_PRINT("slot offset size type flags " + "address mtrr\n\n"); + i = 0; + list_for_each(list, &dev->maplist->head) { + r_list = (drm_map_list_t *)list; + map = r_list->map; + if(!map) continue; + if (map->type < 0 || map->type > 3) type = "??"; + else type = types[map->type]; + DRM_PROC_PRINT("%4d 0x%08lx 0x%08lx %4.4s 0x%02x 0x%08lx ", + i, + map->offset, + map->size, + type, + map->flags, + (unsigned long)map->handle); + if (map->mtrr < 0) { + DRM_PROC_PRINT("none\n"); + } else { + DRM_PROC_PRINT("%4d\n", map->mtrr); + } + i++; + } + + if (len > request + offset) return request; + *eof = 1; + return len - offset; +} + +static int DRM(vm_info)(char *buf, char **start, off_t offset, int request, + int *eof, void *data) +{ + drm_device_t *dev = (drm_device_t *)data; + int ret; + + down(&dev->struct_sem); + ret = DRM(_vm_info)(buf, start, offset, request, eof, data); + up(&dev->struct_sem); + return ret; +} + + +static int DRM(_queues_info)(char *buf, char **start, off_t offset, + int request, int *eof, void *data) +{ + drm_device_t *dev = (drm_device_t *)data; + int len = 0; + int i; + drm_queue_t *q; + + if (offset > DRM_PROC_LIMIT) { + *eof = 1; + return 0; + } + + *start = &buf[offset]; + *eof = 0; + + DRM_PROC_PRINT(" ctx/flags use fin" + " blk/rw/rwf wait flushed queued" + " locks\n\n"); + for (i = 0; i < dev->queue_count; i++) { + q = dev->queuelist[i]; + atomic_inc(&q->use_count); + DRM_PROC_PRINT_RET(atomic_dec(&q->use_count), + "%5d/0x%03x %5d %5d" + " %5d/%c%c/%c%c%c %5Zd\n", + i, + q->flags, + atomic_read(&q->use_count), + atomic_read(&q->finalization), + atomic_read(&q->block_count), + atomic_read(&q->block_read) ? 'r' : '-', + atomic_read(&q->block_write) ? 'w' : '-', + waitqueue_active(&q->read_queue) ? 'r':'-', + waitqueue_active(&q->write_queue) ? 'w':'-', + waitqueue_active(&q->flush_queue) ? 'f':'-', + DRM_BUFCOUNT(&q->waitlist)); + atomic_dec(&q->use_count); + } + + if (len > request + offset) return request; + *eof = 1; + return len - offset; +} + +static int DRM(queues_info)(char *buf, char **start, off_t offset, int request, + int *eof, void *data) +{ + drm_device_t *dev = (drm_device_t *)data; + int ret; + + down(&dev->struct_sem); + ret = DRM(_queues_info)(buf, start, offset, request, eof, data); + up(&dev->struct_sem); + return ret; +} + +/* drm_bufs_info is called whenever a process reads + /dev/dri//bufs. 
*/ + +static int DRM(_bufs_info)(char *buf, char **start, off_t offset, int request, + int *eof, void *data) +{ + drm_device_t *dev = (drm_device_t *)data; + int len = 0; + drm_device_dma_t *dma = dev->dma; + int i; + + if (!dma || offset > DRM_PROC_LIMIT) { + *eof = 1; + return 0; + } + + *start = &buf[offset]; + *eof = 0; + + DRM_PROC_PRINT(" o size count free segs pages kB\n\n"); + for (i = 0; i <= DRM_MAX_ORDER; i++) { + if (dma->bufs[i].buf_count) + DRM_PROC_PRINT("%2d %8d %5d %5d %5d %5d %5ld\n", + i, + dma->bufs[i].buf_size, + dma->bufs[i].buf_count, + atomic_read(&dma->bufs[i] + .freelist.count), + dma->bufs[i].seg_count, + dma->bufs[i].seg_count + *(1 << dma->bufs[i].page_order), + (dma->bufs[i].seg_count + * (1 << dma->bufs[i].page_order)) + * PAGE_SIZE / 1024); + } + DRM_PROC_PRINT("\n"); + for (i = 0; i < dma->buf_count; i++) { + if (i && !(i%32)) DRM_PROC_PRINT("\n"); + DRM_PROC_PRINT(" %d", dma->buflist[i]->list); + } + DRM_PROC_PRINT("\n"); + + if (len > request + offset) return request; + *eof = 1; + return len - offset; +} + +static int DRM(bufs_info)(char *buf, char **start, off_t offset, int request, + int *eof, void *data) +{ + drm_device_t *dev = (drm_device_t *)data; + int ret; + + down(&dev->struct_sem); + ret = DRM(_bufs_info)(buf, start, offset, request, eof, data); + up(&dev->struct_sem); + return ret; +} + + +static int DRM(_clients_info)(char *buf, char **start, off_t offset, + int request, int *eof, void *data) +{ + drm_device_t *dev = (drm_device_t *)data; + int len = 0; + drm_file_t *priv; + + if (offset > DRM_PROC_LIMIT) { + *eof = 1; + return 0; + } + + *start = &buf[offset]; + *eof = 0; + + DRM_PROC_PRINT("a dev pid uid magic ioctls\n\n"); + for (priv = dev->file_first; priv; priv = priv->next) { + DRM_PROC_PRINT("%c %3d %5d %5d %10u %10lu\n", + priv->authenticated ? 'y' : 'n', + priv->minor, + priv->pid, + priv->uid, + priv->magic, + priv->ioctl_count); + } + + if (len > request + offset) return request; + *eof = 1; + return len - offset; +} + +static int DRM(clients_info)(char *buf, char **start, off_t offset, + int request, int *eof, void *data) +{ + drm_device_t *dev = (drm_device_t *)data; + int ret; + + down(&dev->struct_sem); + ret = DRM(_clients_info)(buf, start, offset, request, eof, data); + up(&dev->struct_sem); + return ret; +} + +#if DRM_DEBUG_CODE + +#define DRM_VMA_VERBOSE 0 + +static int DRM(_vma_info)(char *buf, char **start, off_t offset, int request, + int *eof, void *data) +{ + drm_device_t *dev = (drm_device_t *)data; + int len = 0; + drm_vma_entry_t *pt; + struct vm_area_struct *vma; +#if DRM_VMA_VERBOSE + unsigned long i; + unsigned long address; + pgd_t *pgd; + pmd_t *pmd; + pte_t *pte; +#endif +#if defined(__i386__) + unsigned int pgprot; +#endif + + if (offset > DRM_PROC_LIMIT) { + *eof = 1; + return 0; + } + + *start = &buf[offset]; + *eof = 0; + + DRM_PROC_PRINT("vma use count: %d, high_memory = %p, 0x%08lx\n", + atomic_read(&dev->vma_count), + high_memory, virt_to_phys(high_memory)); + for (pt = dev->vmalist; pt; pt = pt->next) { + if (!(vma = pt->vma)) continue; + DRM_PROC_PRINT("\n%5d 0x%08lx-0x%08lx %c%c%c%c%c%c 0x%08lx", + pt->pid, + vma->vm_start, + vma->vm_end, + vma->vm_flags & VM_READ ? 'r' : '-', + vma->vm_flags & VM_WRITE ? 'w' : '-', + vma->vm_flags & VM_EXEC ? 'x' : '-', + vma->vm_flags & VM_MAYSHARE ? 's' : 'p', + vma->vm_flags & VM_LOCKED ? 'l' : '-', + vma->vm_flags & VM_IO ? 
'i' : '-', + VM_OFFSET(vma)); + +#if defined(__i386__) + pgprot = pgprot_val(vma->vm_page_prot); + DRM_PROC_PRINT(" %c%c%c%c%c%c%c%c%c", + pgprot & _PAGE_PRESENT ? 'p' : '-', + pgprot & _PAGE_RW ? 'w' : 'r', + pgprot & _PAGE_USER ? 'u' : 's', + pgprot & _PAGE_PWT ? 't' : 'b', + pgprot & _PAGE_PCD ? 'u' : 'c', + pgprot & _PAGE_ACCESSED ? 'a' : '-', + pgprot & _PAGE_DIRTY ? 'd' : '-', + pgprot & _PAGE_PSE ? 'm' : 'k', + pgprot & _PAGE_GLOBAL ? 'g' : 'l' ); +#endif + DRM_PROC_PRINT("\n"); +#if 0 + for (i = vma->vm_start; i < vma->vm_end; i += PAGE_SIZE) { + pgd = pgd_offset(vma->vm_mm, i); + pmd = pmd_offset(pgd, i); + pte = pte_offset(pmd, i); + if (pte_present(*pte)) { + address = __pa(pte_page(*pte)) + + (i & (PAGE_SIZE-1)); + DRM_PROC_PRINT(" 0x%08lx -> 0x%08lx" + " %c%c%c%c%c\n", + i, + address, + pte_read(*pte) ? 'r' : '-', + pte_write(*pte) ? 'w' : '-', + pte_exec(*pte) ? 'x' : '-', + pte_dirty(*pte) ? 'd' : '-', + pte_young(*pte) ? 'a' : '-' ); + } else { + DRM_PROC_PRINT(" 0x%08lx\n", i); + } + } +#endif + } + + if (len > request + offset) return request; + *eof = 1; + return len - offset; +} + +static int DRM(vma_info)(char *buf, char **start, off_t offset, int request, + int *eof, void *data) +{ + drm_device_t *dev = (drm_device_t *)data; + int ret; + + down(&dev->struct_sem); + ret = DRM(_vma_info)(buf, start, offset, request, eof, data); + up(&dev->struct_sem); + return ret; +} +#endif + + +#if __HAVE_DMA_HISTOGRAM +static int DRM(_histo_info)(char *buf, char **start, off_t offset, int request, + int *eof, void *data) +{ + drm_device_t *dev = (drm_device_t *)data; + int len = 0; + drm_device_dma_t *dma = dev->dma; + int i; + unsigned long slot_value = DRM_DMA_HISTOGRAM_INITIAL; + unsigned long prev_value = 0; + drm_buf_t *buffer; + + if (offset > DRM_PROC_LIMIT) { + *eof = 1; + return 0; + } + + *start = &buf[offset]; + *eof = 0; + + DRM_PROC_PRINT("general statistics:\n"); + DRM_PROC_PRINT("total %10u\n", atomic_read(&dev->histo.total)); + DRM_PROC_PRINT("open %10u\n", + atomic_read(&dev->counts[_DRM_STAT_OPENS])); + DRM_PROC_PRINT("close %10u\n", + atomic_read(&dev->counts[_DRM_STAT_CLOSES])); + DRM_PROC_PRINT("ioctl %10u\n", + atomic_read(&dev->counts[_DRM_STAT_IOCTLS])); + + DRM_PROC_PRINT("\nlock statistics:\n"); + DRM_PROC_PRINT("locks %10u\n", + atomic_read(&dev->counts[_DRM_STAT_LOCKS])); + DRM_PROC_PRINT("unlocks %10u\n", + atomic_read(&dev->counts[_DRM_STAT_UNLOCKS])); + + if (dma) { +#if 0 + DRM_PROC_PRINT("\ndma statistics:\n"); + DRM_PROC_PRINT("prio %10u\n", + atomic_read(&dma->total_prio)); + DRM_PROC_PRINT("bytes %10u\n", + atomic_read(&dma->total_bytes)); + DRM_PROC_PRINT("dmas %10u\n", + atomic_read(&dma->total_dmas)); + DRM_PROC_PRINT("missed:\n"); + DRM_PROC_PRINT(" dma %10u\n", + atomic_read(&dma->total_missed_dma)); + DRM_PROC_PRINT(" lock %10u\n", + atomic_read(&dma->total_missed_lock)); + DRM_PROC_PRINT(" free %10u\n", + atomic_read(&dma->total_missed_free)); + DRM_PROC_PRINT(" sched %10u\n", + atomic_read(&dma->total_missed_sched)); + DRM_PROC_PRINT("tried %10u\n", + atomic_read(&dma->total_tried)); + DRM_PROC_PRINT("hit %10u\n", + atomic_read(&dma->total_hit)); + DRM_PROC_PRINT("lost %10u\n", + atomic_read(&dma->total_lost)); +#endif + + buffer = dma->next_buffer; + if (buffer) { + DRM_PROC_PRINT("next_buffer %7d\n", buffer->idx); + } else { + DRM_PROC_PRINT("next_buffer none\n"); + } + buffer = dma->this_buffer; + if (buffer) { + DRM_PROC_PRINT("this_buffer %7d\n", buffer->idx); + } else { + DRM_PROC_PRINT("this_buffer none\n"); + } + } + + + 
DRM_PROC_PRINT("\nvalues:\n"); + if (dev->lock.hw_lock) { + DRM_PROC_PRINT("lock 0x%08x\n", + dev->lock.hw_lock->lock); + } else { + DRM_PROC_PRINT("lock none\n"); + } + DRM_PROC_PRINT("context_flag 0x%08lx\n", dev->context_flag); + DRM_PROC_PRINT("interrupt_flag 0x%08lx\n", dev->interrupt_flag); + DRM_PROC_PRINT("dma_flag 0x%08lx\n", dev->dma_flag); + + DRM_PROC_PRINT("queue_count %10d\n", dev->queue_count); + DRM_PROC_PRINT("last_context %10d\n", dev->last_context); + DRM_PROC_PRINT("last_switch %10lu\n", dev->last_switch); + DRM_PROC_PRINT("last_checked %10d\n", dev->last_checked); + + + DRM_PROC_PRINT("\n q2d d2c c2f" + " q2c q2f dma sch" + " ctx lacq lhld\n\n"); + for (i = 0; i < DRM_DMA_HISTOGRAM_SLOTS; i++) { + DRM_PROC_PRINT("%s %10lu %10u %10u %10u %10u %10u" + " %10u %10u %10u %10u %10u\n", + i == DRM_DMA_HISTOGRAM_SLOTS - 1 ? ">=" : "< ", + i == DRM_DMA_HISTOGRAM_SLOTS - 1 + ? prev_value : slot_value , + + atomic_read(&dev->histo + .queued_to_dispatched[i]), + atomic_read(&dev->histo + .dispatched_to_completed[i]), + atomic_read(&dev->histo + .completed_to_freed[i]), + + atomic_read(&dev->histo + .queued_to_completed[i]), + atomic_read(&dev->histo + .queued_to_freed[i]), + atomic_read(&dev->histo.dma[i]), + atomic_read(&dev->histo.schedule[i]), + atomic_read(&dev->histo.ctx[i]), + atomic_read(&dev->histo.lacq[i]), + atomic_read(&dev->histo.lhld[i])); + prev_value = slot_value; + slot_value = DRM_DMA_HISTOGRAM_NEXT(slot_value); + } + + if (len > request + offset) return request; + *eof = 1; + return len - offset; +} + +static int DRM(histo_info)(char *buf, char **start, off_t offset, int request, + int *eof, void *data) +{ + drm_device_t *dev = (drm_device_t *)data; + int ret; + + down(&dev->struct_sem); + ret = DRM(_histo_info)(buf, start, offset, request, eof, data); + up(&dev->struct_sem); + return ret; +} +#endif diff -urpN linux-2.4.9-linus/drivers/char/drm/drm_vm.h linux-2.4.9-larpage/drivers/char/drm/drm_vm.h --- linux-2.4.9-linus/drivers/char/drm/drm_vm.h 2001-08-15 14:21:47.000000000 -0700 +++ linux-2.4.9-larpage/drivers/char/drm/drm_vm.h 2002-11-20 02:02:39.000000000 -0800 @@ -350,7 +350,7 @@ int DRM(mmap_dma)(struct file *filp, str vma->vm_start, vma->vm_end, VM_OFFSET(vma)); /* Length must match exact page count */ - if (!dma || (length >> PAGE_SHIFT) != dma->page_count) { + if (!dma || ((length + PAGE_SIZE - 1) >> PAGE_SHIFT) != dma->page_count) { unlock_kernel(); return -EINVAL; } diff -urpN linux-2.4.9-linus/drivers/char/drm/drm_vm.h.orig linux-2.4.9-larpage/drivers/char/drm/drm_vm.h.orig --- linux-2.4.9-linus/drivers/char/drm/drm_vm.h.orig 1969-12-31 16:00:00.000000000 -0800 +++ linux-2.4.9-larpage/drivers/char/drm/drm_vm.h.orig 2001-08-15 14:21:47.000000000 -0700 @@ -0,0 +1,501 @@ +/* drm_vm.h -- Memory mapping for DRM -*- linux-c -*- + * Created: Mon Jan 4 08:58:31 1999 by faith@valinux.com + * + * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. + * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. + * All Rights Reserved. 
+ * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice (including the next + * paragraph) shall be included in all copies or substantial portions of the + * Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + * Authors: + * Rickard E. (Rik) Faith + * Gareth Hughes + */ + +#define __NO_VERSION__ +#include "drmP.h" + +struct vm_operations_struct DRM(vm_ops) = { + nopage: DRM(vm_nopage), + open: DRM(vm_open), + close: DRM(vm_close), +}; + +struct vm_operations_struct DRM(vm_shm_ops) = { + nopage: DRM(vm_shm_nopage), + open: DRM(vm_open), + close: DRM(vm_shm_close), +}; + +struct vm_operations_struct DRM(vm_dma_ops) = { + nopage: DRM(vm_dma_nopage), + open: DRM(vm_open), + close: DRM(vm_close), +}; + +struct vm_operations_struct DRM(vm_sg_ops) = { + nopage: DRM(vm_sg_nopage), + open: DRM(vm_open), + close: DRM(vm_close), +}; + +#if LINUX_VERSION_CODE < 0x020317 +unsigned long DRM(vm_nopage)(struct vm_area_struct *vma, + unsigned long address, + int write_access) +#else + /* Return type changed in 2.3.23 */ +struct page *DRM(vm_nopage)(struct vm_area_struct *vma, + unsigned long address, + int write_access) +#endif +{ + return NOPAGE_SIGBUS; /* Disallow mremap */ +} + +#if LINUX_VERSION_CODE < 0x020317 +unsigned long DRM(vm_shm_nopage)(struct vm_area_struct *vma, + unsigned long address, + int write_access) +#else + /* Return type changed in 2.3.23 */ +struct page *DRM(vm_shm_nopage)(struct vm_area_struct *vma, + unsigned long address, + int write_access) +#endif +{ +#if LINUX_VERSION_CODE >= 0x020300 + drm_map_t *map = (drm_map_t *)vma->vm_private_data; +#else + drm_map_t *map = (drm_map_t *)vma->vm_pte; +#endif + unsigned long offset; + unsigned long i; + pgd_t *pgd; + pmd_t *pmd; + pte_t *pte; + struct page *page; + + if (address > vma->vm_end) return NOPAGE_SIGBUS; /* Disallow mremap */ + if (!map) return NOPAGE_OOM; /* Nothing allocated */ + + offset = address - vma->vm_start; + i = (unsigned long)map->handle + offset; + /* We have to walk page tables here because we need large SAREA's, and + * they need to be virtually contiguous in kernel space. 
+ */ + pgd = pgd_offset_k( i ); + if( !pgd_present( *pgd ) ) return NOPAGE_OOM; + pmd = pmd_offset( pgd, i ); + if( !pmd_present( *pmd ) ) return NOPAGE_OOM; + pte = pte_offset( pmd, i ); + if( !pte_present( *pte ) ) return NOPAGE_OOM; + + page = pte_page(*pte); + get_page(page); + + DRM_DEBUG("0x%08lx => 0x%08x\n", address, page_to_bus(page)); +#if LINUX_VERSION_CODE < 0x020317 + return page_address(page); +#else + return page; +#endif +} + +/* Special close routine which deletes map information if we are the last + * person to close a mapping and its not in the global maplist. + */ + +void DRM(vm_shm_close)(struct vm_area_struct *vma) +{ + drm_file_t *priv = vma->vm_file->private_data; + drm_device_t *dev = priv->dev; + drm_vma_entry_t *pt, *prev, *next; + drm_map_t *map; + drm_map_list_t *r_list; + struct list_head *list; + int found_maps = 0; + + DRM_DEBUG("0x%08lx,0x%08lx\n", + vma->vm_start, vma->vm_end - vma->vm_start); +#if LINUX_VERSION_CODE < 0x020333 + MOD_DEC_USE_COUNT; /* Needed before Linux 2.3.51 */ +#endif + atomic_dec(&dev->vma_count); + +#if LINUX_VERSION_CODE >= 0x020300 + map = vma->vm_private_data; +#else + map = vma->vm_pte; +#endif + + down(&dev->struct_sem); + for (pt = dev->vmalist, prev = NULL; pt; pt = next) { + next = pt->next; +#if LINUX_VERSION_CODE >= 0x020300 + if (pt->vma->vm_private_data == map) found_maps++; +#else + if (pt->vma->vm_pte == map) found_maps++; +#endif + if (pt->vma == vma) { + if (prev) { + prev->next = pt->next; + } else { + dev->vmalist = pt->next; + } + DRM(free)(pt, sizeof(*pt), DRM_MEM_VMAS); + } else { + prev = pt; + } + } + /* We were the only map that was found */ + if(found_maps == 1 && + map->flags & _DRM_REMOVABLE) { + /* Check to see if we are in the maplist, if we are not, then + * we delete this mappings information. + */ + found_maps = 0; + list = &dev->maplist->head; + list_for_each(list, &dev->maplist->head) { + r_list = (drm_map_list_t *) list; + if (r_list->map == map) found_maps++; + } + + if(!found_maps) { + switch (map->type) { + case _DRM_REGISTERS: + case _DRM_FRAME_BUFFER: +#if __REALLY_HAVE_MTRR + if (map->mtrr >= 0) { + int retcode; + retcode = mtrr_del(map->mtrr, + map->offset, + map->size); + DRM_DEBUG("mtrr_del = %d\n", retcode); + } +#endif + DRM(ioremapfree)(map->handle, map->size); + break; + case _DRM_SHM: + vfree(map->handle); + break; + case _DRM_AGP: + case _DRM_SCATTER_GATHER: + break; + } + DRM(free)(map, sizeof(*map), DRM_MEM_MAPS); + } + } + up(&dev->struct_sem); +} + +#if LINUX_VERSION_CODE < 0x020317 +unsigned long DRM(vm_dma_nopage)(struct vm_area_struct *vma, + unsigned long address, + int write_access) +#else + /* Return type changed in 2.3.23 */ +struct page *DRM(vm_dma_nopage)(struct vm_area_struct *vma, + unsigned long address, + int write_access) +#endif +{ + drm_file_t *priv = vma->vm_file->private_data; + drm_device_t *dev = priv->dev; + drm_device_dma_t *dma = dev->dma; + unsigned long physical; + unsigned long offset; + unsigned long page; + + if (!dma) return NOPAGE_SIGBUS; /* Error */ + if (address > vma->vm_end) return NOPAGE_SIGBUS; /* Disallow mremap */ + if (!dma->pagelist) return NOPAGE_OOM ; /* Nothing allocated */ + + offset = address - vma->vm_start; /* vm_[pg]off[set] should be 0 */ + page = offset >> PAGE_SHIFT; + physical = dma->pagelist[page] + (offset & (~PAGE_MASK)); + atomic_inc(&virt_to_page(physical)->count); /* Dec. 
by kernel */ + + DRM_DEBUG("0x%08lx (page %lu) => 0x%08lx\n", address, page, physical); +#if LINUX_VERSION_CODE < 0x020317 + return physical; +#else + return virt_to_page(physical); +#endif +} + +#if LINUX_VERSION_CODE < 0x020317 +unsigned long DRM(vm_sg_nopage)(struct vm_area_struct *vma, + unsigned long address, + int write_access) +#else + /* Return type changed in 2.3.23 */ +struct page *DRM(vm_sg_nopage)(struct vm_area_struct *vma, + unsigned long address, + int write_access) +#endif +{ +#if LINUX_VERSION_CODE >= 0x020300 + drm_map_t *map = (drm_map_t *)vma->vm_private_data; +#else + drm_map_t *map = (drm_map_t *)vma->vm_pte; +#endif + drm_file_t *priv = vma->vm_file->private_data; + drm_device_t *dev = priv->dev; + drm_sg_mem_t *entry = dev->sg; + unsigned long offset; + unsigned long map_offset; + unsigned long page_offset; + struct page *page; + + if (!entry) return NOPAGE_SIGBUS; /* Error */ + if (address > vma->vm_end) return NOPAGE_SIGBUS; /* Disallow mremap */ + if (!entry->pagelist) return NOPAGE_OOM ; /* Nothing allocated */ + + + offset = address - vma->vm_start; + map_offset = map->offset - dev->sg->handle; + page_offset = (offset >> PAGE_SHIFT) + (map_offset >> PAGE_SHIFT); + page = entry->pagelist[page_offset]; + atomic_inc(&page->count); /* Dec. by kernel */ + +#if LINUX_VERSION_CODE < 0x020317 + return (unsigned long)virt_to_phys(page->virtual); +#else + return page; +#endif +} + +void DRM(vm_open)(struct vm_area_struct *vma) +{ + drm_file_t *priv = vma->vm_file->private_data; + drm_device_t *dev = priv->dev; + drm_vma_entry_t *vma_entry; + + DRM_DEBUG("0x%08lx,0x%08lx\n", + vma->vm_start, vma->vm_end - vma->vm_start); + atomic_inc(&dev->vma_count); +#if LINUX_VERSION_CODE < 0x020333 + /* The map can exist after the fd is closed. */ + MOD_INC_USE_COUNT; /* Needed before Linux 2.3.51 */ +#endif + + vma_entry = DRM(alloc)(sizeof(*vma_entry), DRM_MEM_VMAS); + if (vma_entry) { + down(&dev->struct_sem); + vma_entry->vma = vma; + vma_entry->next = dev->vmalist; + vma_entry->pid = current->pid; + dev->vmalist = vma_entry; + up(&dev->struct_sem); + } +} + +void DRM(vm_close)(struct vm_area_struct *vma) +{ + drm_file_t *priv = vma->vm_file->private_data; + drm_device_t *dev = priv->dev; + drm_vma_entry_t *pt, *prev; + + DRM_DEBUG("0x%08lx,0x%08lx\n", + vma->vm_start, vma->vm_end - vma->vm_start); +#if LINUX_VERSION_CODE < 0x020333 + MOD_DEC_USE_COUNT; /* Needed before Linux 2.3.51 */ +#endif + atomic_dec(&dev->vma_count); + + down(&dev->struct_sem); + for (pt = dev->vmalist, prev = NULL; pt; prev = pt, pt = pt->next) { + if (pt->vma == vma) { + if (prev) { + prev->next = pt->next; + } else { + dev->vmalist = pt->next; + } + DRM(free)(pt, sizeof(*pt), DRM_MEM_VMAS); + break; + } + } + up(&dev->struct_sem); +} + +int DRM(mmap_dma)(struct file *filp, struct vm_area_struct *vma) +{ + drm_file_t *priv = filp->private_data; + drm_device_t *dev; + drm_device_dma_t *dma; + unsigned long length = vma->vm_end - vma->vm_start; + + lock_kernel(); + dev = priv->dev; + dma = dev->dma; + DRM_DEBUG("start = 0x%lx, end = 0x%lx, offset = 0x%lx\n", + vma->vm_start, vma->vm_end, VM_OFFSET(vma)); + + /* Length must match exact page count */ + if (!dma || (length >> PAGE_SHIFT) != dma->page_count) { + unlock_kernel(); + return -EINVAL; + } + unlock_kernel(); + + vma->vm_ops = &DRM(vm_dma_ops); + vma->vm_flags |= VM_LOCKED | VM_SHM; /* Don't swap */ + +#if LINUX_VERSION_CODE < 0x020203 /* KERNEL_VERSION(2,2,3) */ + /* In Linux 2.2.3 and above, this is + handled in do_mmap() in mm/mmap.c. 
*/ + ++filp->f_count; +#endif + vma->vm_file = filp; /* Needed for drm_vm_open() */ + DRM(vm_open)(vma); + return 0; +} + +#ifndef DRIVER_GET_MAP_OFS +#define DRIVER_GET_MAP_OFS() (map->offset) +#endif + +#ifndef DRIVER_GET_REG_OFS +#ifdef __alpha__ +#define DRIVER_GET_REG_OFS() (dev->hose->dense_mem_base - \ + dev->hose->mem_space->start) +#else +#define DRIVER_GET_REG_OFS() 0 +#endif +#endif + +int DRM(mmap)(struct file *filp, struct vm_area_struct *vma) +{ + drm_file_t *priv = filp->private_data; + drm_device_t *dev = priv->dev; + drm_map_t *map = NULL; + drm_map_list_t *r_list; + unsigned long offset = 0; + struct list_head *list; + + DRM_DEBUG("start = 0x%lx, end = 0x%lx, offset = 0x%lx\n", + vma->vm_start, vma->vm_end, VM_OFFSET(vma)); + + if ( !priv->authenticated ) return -EACCES; + + if (!VM_OFFSET(vma)) return DRM(mmap_dma)(filp, vma); + + /* A sequential search of a linked list is + fine here because: 1) there will only be + about 5-10 entries in the list and, 2) a + DRI client only has to do this mapping + once, so it doesn't have to be optimized + for performance, even if the list was a + bit longer. */ + list_for_each(list, &dev->maplist->head) { + unsigned long off; + + r_list = (drm_map_list_t *)list; + map = r_list->map; + if (!map) continue; + off = DRIVER_GET_MAP_OFS(); + if (off == VM_OFFSET(vma)) break; + } + + if (!map || ((map->flags&_DRM_RESTRICTED) && !capable(CAP_SYS_ADMIN))) + return -EPERM; + + /* Check for valid size. */ + if (map->size != vma->vm_end - vma->vm_start) return -EINVAL; + + if (!capable(CAP_SYS_ADMIN) && (map->flags & _DRM_READ_ONLY)) { + vma->vm_flags &= VM_MAYWRITE; +#if defined(__i386__) + pgprot_val(vma->vm_page_prot) &= ~_PAGE_RW; +#else + /* Ye gads this is ugly. With more thought + we could move this up higher and use + `protection_map' instead. */ + vma->vm_page_prot = __pgprot(pte_val(pte_wrprotect( + __pte(pgprot_val(vma->vm_page_prot))))); +#endif + } + + switch (map->type) { + case _DRM_FRAME_BUFFER: + case _DRM_REGISTERS: + case _DRM_AGP: + if (VM_OFFSET(vma) >= __pa(high_memory)) { +#if defined(__i386__) + if (boot_cpu_data.x86 > 3 && map->type != _DRM_AGP) { + pgprot_val(vma->vm_page_prot) |= _PAGE_PCD; + pgprot_val(vma->vm_page_prot) &= ~_PAGE_PWT; + } +#elif defined(__ia64__) + if (map->type != _DRM_AGP) + vma->vm_page_prot = + pgprot_writecombine(vma->vm_page_prot); +#elif defined(__powerpc__) + pgprot_val(vma->vm_page_prot) |= _PAGE_NO_CACHE | _PAGE_GUARDED; +#endif + vma->vm_flags |= VM_IO; /* not in core dump */ + } + offset = DRIVER_GET_REG_OFS(); + if (remap_page_range(vma->vm_start, + VM_OFFSET(vma) + offset, + vma->vm_end - vma->vm_start, + vma->vm_page_prot)) + return -EAGAIN; + DRM_DEBUG(" Type = %d; start = 0x%lx, end = 0x%lx," + " offset = 0x%lx\n", + map->type, + vma->vm_start, vma->vm_end, VM_OFFSET(vma) + offset); + vma->vm_ops = &DRM(vm_ops); + break; + case _DRM_SHM: + vma->vm_ops = &DRM(vm_shm_ops); +#if LINUX_VERSION_CODE >= 0x020300 + vma->vm_private_data = (void *)map; +#else + vma->vm_pte = (unsigned long)map; +#endif + /* Don't let this area swap. Change when + DRM_KERNEL advisory is supported. */ + vma->vm_flags |= VM_LOCKED; + break; + case _DRM_SCATTER_GATHER: + vma->vm_ops = &DRM(vm_sg_ops); +#if LINUX_VERSION_CODE >= 0x020300 + vma->vm_private_data = (void *)map; +#else + vma->vm_pte = (unsigned long)map; +#endif + vma->vm_flags |= VM_LOCKED; + break; + default: + return -EINVAL; /* This should never happen. 
*/ + } + vma->vm_flags |= VM_LOCKED | VM_SHM; /* Don't swap */ + +#if LINUX_VERSION_CODE < 0x020203 /* KERNEL_VERSION(2,2,3) */ + /* In Linux 2.2.3 and above, this is + handled in do_mmap() in mm/mmap.c. */ + ++filp->f_count; +#endif + vma->vm_file = filp; /* Needed for drm_vm_open() */ + DRM(vm_open)(vma); + return 0; +} diff -urpN linux-2.4.9-linus/drivers/char/drm/ffb_drv.c linux-2.4.9-larpage/drivers/char/drm/ffb_drv.c --- linux-2.4.9-linus/drivers/char/drm/ffb_drv.c 2001-08-12 11:23:32.000000000 -0700 +++ linux-2.4.9-larpage/drivers/char/drm/ffb_drv.c 2002-11-20 02:02:38.000000000 -0800 @@ -279,7 +279,7 @@ static unsigned long ffb_get_unmapped_ar unsigned long pgoff, unsigned long flags) { - drm_map_t *map = ffb_find_map(filp, pgoff << PAGE_SHIFT); + drm_map_t *map = ffb_find_map(filp, pgoff << MMUPAGE_SHIFT); unsigned long addr = -ENOMEM; if (!map) @@ -292,11 +292,11 @@ static unsigned long ffb_get_unmapped_ar #else addr = get_unmapped_area(NULL, hint, len, pgoff, flags); #endif - } else if (map->type == _DRM_SHM && SHMLBA > PAGE_SIZE) { - unsigned long slack = SHMLBA - PAGE_SIZE; + } else if (map->type == _DRM_SHM && SHMLBA > MMUPAGE_SIZE) { + unsigned long slack = SHMLBA - MMUPAGE_SIZE; addr = get_unmapped_area(NULL, hint, len + slack, pgoff, flags); - if (!(addr & ~PAGE_MASK)) { + if (!(addr & ~MMUPAGE_MASK)) { unsigned long kvirt = (unsigned long) map->handle; if ((kvirt & (SHMLBA - 1)) != (addr & (SHMLBA - 1))) { diff -urpN linux-2.4.9-linus/drivers/char/drm/ffb_drv.c.orig linux-2.4.9-larpage/drivers/char/drm/ffb_drv.c.orig --- linux-2.4.9-linus/drivers/char/drm/ffb_drv.c.orig 1969-12-31 16:00:00.000000000 -0800 +++ linux-2.4.9-larpage/drivers/char/drm/ffb_drv.c.orig 2002-11-20 02:02:37.000000000 -0800 @@ -0,0 +1,399 @@ +/* $Id: ffb_drv.c,v 1.15 2001/08/09 17:47:51 davem Exp $ + * ffb_drv.c: Creator/Creator3D direct rendering driver. + * + * Copyright (C) 2000 David S. Miller (davem@redhat.com) + */ + +#include +#include "ffb.h" +#include "drmP.h" + +#include "ffb_drv.h" + +#include +#include +#include +#include +#include + +#define DRIVER_AUTHOR "David S. Miller" + +#define DRIVER_NAME "ffb" +#define DRIVER_DESC "Creator/Creator3D" +#define DRIVER_DATE "20000517" + +#define DRIVER_MAJOR 0 +#define DRIVER_MINOR 0 +#define DRIVER_PATCHLEVEL 1 + +#define DRIVER_FOPS \ +static struct file_operations DRM(fops) = { \ + owner: THIS_MODULE, \ + open: DRM(open), \ + flush: DRM(flush), \ + release: DRM(release), \ + ioctl: DRM(ioctl), \ + mmap: DRM(mmap), \ + read: DRM(read), \ + fasync: DRM(fasync), \ + poll: DRM(poll), \ + get_unmapped_area: ffb_get_unmapped_area, \ +} + +#define DRIVER_COUNT_CARDS() ffb_count_card_instances() +/* Allocate private structure and fill it */ +#define DRIVER_PRESETUP() do { \ + int _ret; \ + _ret = ffb_presetup(dev); \ + if(_ret != 0) return _ret; \ +} while(0) + +/* Free private structure */ +#define DRIVER_PRETAKEDOWN() do { \ + if(dev->dev_private) kfree(dev->dev_private); \ +} while(0) + +#define DRIVER_POSTCLEANUP() do { \ + if(ffb_position != NULL) kfree(ffb_position); \ +} while(0) + +/* We have to free up the rogue hw context state holding error or + * else we will leak it. 
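One oddity worth flagging in DRM(mmap) further up: for _DRM_READ_ONLY maps it performs 'vma->vm_flags &= VM_MAYWRITE;', which masks off every flag except VM_MAYWRITE rather than clearing the write bits. Later DRM releases clear the bits explicitly; a sketch of the apparent intent (an editorial aside, not a hunk of this patch):

	vma->vm_flags &= ~(VM_WRITE | VM_MAYWRITE);	/* drop write permission */
	/* and write-protect the prototype protection, portably: */
	vma->vm_page_prot = __pgprot(pte_val(pte_wrprotect(
			__pte(pgprot_val(vma->vm_page_prot)))));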
+#include "drm_drawable.h" +#include "drm_drv.h" + +/* This functions must be here since it references DRM(numdevs) + * which drm_drv.h declares. + */ +static int ffb_presetup(drm_device_t *dev) +{ + ffb_dev_priv_t *ffb_priv; + drm_device_t *temp_dev; + int ret = 0; + int i; + + /* Check for the case where no device was found. */ + if (ffb_position == NULL) + return -ENODEV; + + /* Find our instance number by finding our device in dev structure */ + for (i = 0; i < DRM(numdevs); i++) { + temp_dev = &(DRM(device)[i]); + if(temp_dev == dev) + break; + } + + if (i == DRM(numdevs)) + return -ENODEV; + + ffb_priv = kmalloc(sizeof(ffb_dev_priv_t), GFP_KERNEL); + if (!ffb_priv) + return -ENOMEM; + memset(ffb_priv, 0, sizeof(*ffb_priv)); + dev->dev_private = ffb_priv; + + ret = ffb_init_one(dev, + ffb_position[i].node, + ffb_position[i].root, + i); + return ret; +} + +#ifndef MODULE +/* DRM(options) is called by the kernel to parse command-line options + * passed via the boot-loader (e.g., LILO). It calls the insmod option + * routine, drm_parse_drm. + */ + +/* JH- We have to hand expand the string ourselves because of the cpp. If + * anyone can think of a way that we can fit into the __setup macro without + * changing it, then please send the solution my way. + */ +static int __init ffb_options(char *str) +{ + DRM(parse_options)(str); + return 1; +} + +__setup(DRIVER_NAME "=", ffb_options); +#endif + +#include "drm_fops.h" +#include "drm_init.h" +#include "drm_ioctl.h" +#include "drm_lock.h" +#include "drm_memory.h" +#include "drm_proc.h" +#include "drm_vm.h" +#include "drm_stub.h" diff -urpN linux-2.4.9-linus/drivers/char/mem.c linux-2.4.9-larpage/drivers/char/mem.c --- linux-2.4.9-linus/drivers/char/mem.c 2001-07-28 12:37:23.000000000 -0700 +++ linux-2.4.9-larpage/drivers/char/mem.c 2002-11-20 02:02:40.000000000 -0800 @@ -189,7 +189,7 @@ static inline int noncached_address(unsi static int mmap_mem(struct file * file, struct vm_area_struct * vma) { - unsigned long offset = vma->vm_pgoff << PAGE_SHIFT; + unsigned long offset = vma->vm_pgoff << MMUPAGE_SHIFT; /* * Accessing memory above the top the kernel knows about or @@ -352,56 +352,77 @@ static inline size_t read_zero_pagealign struct mm_struct *mm; struct vm_area_struct * vma; unsigned long addr=(unsigned long)buf; + unsigned long unwritten; + unsigned long count; mm = current->mm; - /* Oops, this was forgotten before. -ben */ down_read(&mm->mmap_sem); + vma = find_vma(mm, addr); - /* For private mappings, just map in zero pages. */ - for (vma = find_vma(mm, addr); vma; vma = vma->vm_next) { - unsigned long count; - - if (vma->vm_start > addr || (vma->vm_flags & VM_WRITE) == 0) - goto out_up; - if (vma->vm_flags & VM_SHARED) - break; - count = vma->vm_end - addr; - if (count > size) - count = size; - - zap_page_range(mm, addr, count); - zeromap_page_range(addr, count, PAGE_COPY); + if (PAGE_MMUSHIFT && vma) { + /* + * Align zeropages with pages native to this vma: for + * better swap allocation and for kio page assumptions. + */ + count = ~PAGE_MASK & (vma->vm_start - addr - + (vma->vm_pgoff << MMUPAGE_SHIFT)); + if (count) { + up_read(&mm->mmap_sem); + unwritten = clear_user(buf, count); + size -= count; + if (unwritten || size < PAGE_SIZE) + return size + unwritten; + buf += count; + addr = (unsigned long)buf; + down_read(&mm->mmap_sem); + vma = find_vma(mm, addr); + } + } + /* For private mappings, just map in zero pages. 
+	for (count = 0; vma; vma = vma->vm_next) {
+		if (vma->vm_start > addr ||
+		    (vma->vm_flags & VM_SHARED) ||
+		    !(vma->vm_flags & VM_WRITE))
+			break;
+		count += vma->vm_end - addr;
+		if (count > size) {
+			count = size & PAGE_MASK;
+			addr = (unsigned long)buf + count;
+			break;
+		}
+		addr = vma->vm_end;
+		if (count == size) {
+			/* allow odd mmupages at end of vma */
+			break;
+		}
+	}
+	if (count) {
+		zap_page_range(mm, addr - count, addr);
+		zeromap_page_range(addr - count, addr, PAGE_COPY);
 		size -= count;
-		buf += count;
-		addr += count;
-		if (size == 0)
-			goto out_up;
 	}
 
 	up_read(&mm->mmap_sem);
 
 	/* The shared case is hard. Let's do the conventional zeroing. */
-	do {
-		unsigned long unwritten = clear_user(buf, PAGE_SIZE);
+	while (size >= PAGE_SIZE) {
+		unwritten = clear_user((void *)addr, PAGE_SIZE);
 		if (unwritten)
 			return size + unwritten - PAGE_SIZE;
 		if (current->need_resched)
 			schedule();
-		buf += PAGE_SIZE;
+		addr += PAGE_SIZE;
 		size -= PAGE_SIZE;
-	} while (size);
+	}
 
 	return size;
-
-out_up:
-	up_read(&mm->mmap_sem);
-	return size;
 }
 
 static ssize_t read_zero(struct file * file, char * buf,
 			 size_t count, loff_t *ppos)
 {
-	unsigned long left, unwritten, written = 0;
+	unsigned long unwritten, written = 0;
 
 	if (!count)
 		return 0;
@@ -409,38 +430,39 @@ static ssize_t read_zero(struct file * f
 	if (!access_ok(VERIFY_WRITE, buf, count))
 		return -EFAULT;
 
-	left = count;
-
 	/* do we want to be clever? Arbitrary cut-off */
-	if (count >= PAGE_SIZE*4) {
+	if (count >= MMUPAGE_SIZE*4) {
 		unsigned long partial;
 
 		/* How much left of the page? */
-		partial = (PAGE_SIZE-1) & -(unsigned long) buf;
+		partial = ~MMUPAGE_MASK & -(unsigned long)buf;
 		unwritten = clear_user(buf, partial);
 		written = partial - unwritten;
 		if (unwritten)
 			goto out;
-		left -= partial;
+		count -= partial;
 		buf += partial;
-		unwritten = read_zero_pagealigned(buf, left & PAGE_MASK);
-		written += (left & PAGE_MASK) - unwritten;
-		if (unwritten)
-			goto out;
-		buf += left & PAGE_MASK;
-		left &= ~PAGE_MASK;
+		if (!PAGE_MMUSHIFT || count >= PAGE_SIZE) {
+			unwritten = read_zero_pagealigned(buf, count);
+			written += count - unwritten;
+			if (unwritten >= PAGE_SIZE)
+				goto out;
+			buf += count - unwritten;
+			count = unwritten;
+		}
 	}
-	unwritten = clear_user(buf, left);
-	written += left - unwritten;
+	unwritten = clear_user(buf, count);
+	written += count - unwritten;
 out:
 	return written ? written : -EFAULT;
 }
 
 static int mmap_zero(struct file * file, struct vm_area_struct * vma)
 {
+	vma->vm_pgoff = 0;
 	if (vma->vm_flags & VM_SHARED)
 		return shmem_zero_setup(vma);
-	if (zeromap_page_range(vma->vm_start, vma->vm_end - vma->vm_start, vma->vm_page_prot))
+	if (zeromap_page_range(vma->vm_start, vma->vm_end, vma->vm_page_prot))
 		return -EAGAIN;
 	return 0;
 }
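For readers following the mem.c changes above as well as the DRM ones, the size relationships the larpage patch relies on are worth spelling out: MMUPAGE_* describes the hardware page, PAGE_* the (possibly larger) software page. The following is my reading of the macros, with illustrative shift values only:

#define MMUPAGE_SHIFT	12				/* hardware page: 4K here */
#define MMUPAGE_SIZE	(1UL << MMUPAGE_SHIFT)
#define MMUPAGE_MASK	(~(MMUPAGE_SIZE - 1))

#define PAGE_MMUSHIFT	2				/* 0 when kernel pages are not enlarged */
#define PAGE_SHIFT	(MMUPAGE_SHIFT + PAGE_MMUSHIFT)	/* software page: 16K here */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))

Note that vma->vm_pgoff is counted in MMUPAGE_SIZE units throughout the patch, which is why mmap_mem() above shifts it by MMUPAGE_SHIFT.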
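The subtlest line in the new read_zero_pagealigned() is the head-alignment computation: it measures how many bytes must be cleared by hand before 'addr' reaches a PAGE_SIZE boundary of the vma's own large-page tiling, the tiling implied by vm_start and vm_pgoff. A standalone arithmetic check with hypothetical values (userland C, purely to exercise the expression):

#include <stdio.h>

#define MMUPAGE_SHIFT	12			/* illustrative: 4K mmupages... */
#define PAGE_SHIFT	14			/* ...inside 16K kernel pages */
#define PAGE_MASK	(~((1UL << PAGE_SHIFT) - 1))

int main(void)
{
	unsigned long vm_start = 0x7000;	/* mmupage-aligned vma start */
	unsigned long vm_pgoff = 3;		/* vma's file offset, in mmupages */
	unsigned long addr = vm_start;		/* user buffer starts at the vma */

	/* same expression as in read_zero_pagealigned() */
	unsigned long count = ~PAGE_MASK &
		(vm_start - addr - (vm_pgoff << MMUPAGE_SHIFT));

	printf("head bytes to clear by hand: %#lx\n", count);
	return 0;
}

Here the vma's large-page tiling starts at 0x4000, so one 4K mmupage has to be cleared by hand before 0x8000, the first boundary from which whole zero pages can be mapped; the program prints 0x1000.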
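And a hypothetical user-space test of the path just patched: a large read from /dev/zero into a private writable buffer crosses the 4*MMUPAGE_SIZE cut-off in read_zero(), so whole zero pages are mapped by read_zero_pagealigned() rather than cleared word by word:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	size_t len = 1UL << 20;		/* 1M: well above the cut-off */
	char *buf = malloc(len);	/* private, writable: fast path */
	int fd = open("/dev/zero", O_RDONLY);
	ssize_t got;

	if (!buf || fd < 0)
		return 1;
	got = read(fd, buf, len);	/* zero pages mapped, not copied */
	printf("read %zd bytes, buf[0] == %d\n", got, buf[0]);
	close(fd);
	free(buf);
	return 0;
}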
diff -urpN linux-2.4.9-linus/drivers/char/synclink.c linux-2.4.9-larpage/drivers/char/synclink.c
---
linux-2.4.9-linus/drivers/char/synclink.c 2001-07-25 14:12:02.000000000 -0700 +++ linux-2.4.9-larpage/drivers/char/synclink.c 2002-11-20 02:02:41.000000000 -0800 @@ -145,6 +145,7 @@ MGSL_PARAMS default_params = { #define SHARED_MEM_ADDRESS_SIZE 0x40000 #define BUFFERLISTSIZE (PAGE_SIZE) #define DMABUFFERSIZE (PAGE_SIZE) +#define MAXDMABUFS ((SHARED_MEM_ADDRESS_SIZE-2*BUFFERLISTSIZE)/DMABUFFERSIZE) #define MAXRXFRAMES 7 typedef struct _DMABUFFERENTRY @@ -3873,7 +3874,7 @@ int mgsl_allocate_dma_buffers(struct mgs if ( info->bus_type == MGSL_BUS_TYPE_PCI ) { /* * The PCI adapter has 256KBytes of shared memory to use. - * This is 64 PAGE_SIZE buffers. + * This is 64 4K pages. * * The first page is used for padding at this time so the * buffer list does not begin at offset 0 of the PCI @@ -3883,7 +3884,7 @@ int mgsl_allocate_dma_buffers(struct mgs * list can hold 128 DMA_BUFFER structures at 32 bytes * each. * - * This leaves 62 4K pages. + * This leaves 62 (MAXDMABUFS) 4K pages. * * The next N pages are used for transmit frame(s). We * reserve enough 4K page blocks to hold the required @@ -3894,7 +3895,7 @@ int mgsl_allocate_dma_buffers(struct mgs * be used to receive full MaxFrameSize inbound frames */ info->tx_buffer_count = info->num_tx_dma_buffers * BuffersPerFrame; - info->rx_buffer_count = 62 - info->tx_buffer_count; + info->rx_buffer_count = MAXDMABUFS - info->tx_buffer_count; } else { /* Calculate the number of PAGE_SIZE buffers needed for */ /* receive and transmit DMA buffers. */ @@ -3910,15 +3911,21 @@ int mgsl_allocate_dma_buffers(struct mgs info->rx_buffer_count = (BuffersPerFrame * MAXRXFRAMES) + 6; /* - * limit total TxBuffers & RxBuffers to 62 4K total + * limit total TxBuffers & RxBuffers to MAXDMABUFS * (ala PCI Allocation) */ - if ( (info->tx_buffer_count + info->rx_buffer_count) > 62 ) - info->rx_buffer_count = 62 - info->tx_buffer_count; + if (info->tx_buffer_count + info->rx_buffer_count > MAXDMABUFS) + info->rx_buffer_count = MAXDMABUFS - info->tx_buffer_count; } + if (info->tx_buffer_count >= MAXDMABUFS) { + printk("%s(%d):TX buffer count %u exceeds %u\n", + __FILE__,__LINE__,info->tx_buffer_count,MAXDMABUFS-1); + return -ENOMEM; + } + if ( debug_level >= DEBUG_LEVEL_INFO ) printk("%s(%d):Allocating %d TX and %d RX DMA buffers.\n", __FILE__,__LINE__, info->tx_buffer_count,info->rx_buffer_count); diff -urpN linux-2.4.9-linus/drivers/char/synclink.c.orig linux-2.4.9-larpage/drivers/char/synclink.c.orig --- linux-2.4.9-linus/drivers/char/synclink.c.orig 1969-12-31 16:00:00.000000000 -0800 +++ linux-2.4.9-larpage/drivers/char/synclink.c.orig 2002-11-20 02:02:41.000000000 -0800 @@ -0,0 +1,8223 @@ +/* + * linux/drivers/char/synclink.c + * + * $Id: synclink.c,v 3.12 2001/07/18 19:14:21 paulkf Exp $ + * + * Device driver for Microgate SyncLink ISA and PCI + * high speed multiprotocol serial adapters. + * + * written by Paul Fulghum for Microgate Corporation + * paulkf@microgate.com + * + * Microgate and SyncLink are trademarks of Microgate Corporation + * + * Derived from serial.c written by Theodore Ts'o and Linus Torvalds + * + * Original release 01/11/99 + * + * This code is released under the GNU General Public License (GPL) + * + * This driver is primarily intended for use in synchronous + * HDLC mode. Asynchronous mode is also provided. + * + * When operating in synchronous mode, each call to mgsl_write() + * contains exactly one complete HDLC frame. 
Calling mgsl_put_char + * will start assembling an HDLC frame that will not be sent until + * mgsl_flush_chars or mgsl_write is called. + * + * Synchronous receive data is reported as complete frames. To accomplish + * this, the TTY flip buffer is bypassed (too small to hold largest + * frame and may fragment frames) and the line discipline + * receive entry point is called directly. + * + * This driver has been tested with a slightly modified ppp.c driver + * for synchronous PPP. + * + * 2000/02/16 + * Added interface for syncppp.c driver (an alternate synchronous PPP + * implementation that also supports Cisco HDLC). Each device instance + * registers as a tty device AND a network device (if dosyncppp option + * is set for the device). The functionality is determined by which + * device interface is opened. + * + * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED + * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES + * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE + * DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED + * OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#define VERSION(ver,rel,seq) (((ver)<<16) | ((rel)<<8) | (seq)) +#if defined(__i386__) +# define BREAKPOINT() asm(" int $3"); +#else +# define BREAKPOINT() { } +#endif + +#define MAX_ISA_DEVICES 10 +#define MAX_PCI_DEVICES 10 +#define MAX_TOTAL_DEVICES 20 + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +#include +#include +#include + +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include + +#ifdef CONFIG_SYNCLINK_SYNCPPP_MODULE +#define CONFIG_SYNCLINK_SYNCPPP 1 +#endif + +#ifdef CONFIG_SYNCLINK_SYNCPPP +#if LINUX_VERSION_CODE < VERSION(2,4,3) +#include "../net/wan/syncppp.h" +#else +#include +#endif +#endif + +#include +#define GET_USER(error,value,addr) error = get_user(value,addr) +#define COPY_FROM_USER(error,dest,src,size) error = copy_from_user(dest,src,size) ? -EFAULT : 0 +#define PUT_USER(error,value,addr) error = put_user(value,addr) +#define COPY_TO_USER(error,dest,src,size) error = copy_to_user(dest,src,size) ? 
-EFAULT : 0 + +#include + +#include "linux/synclink.h" + +#define RCLRVALUE 0xffff + +MGSL_PARAMS default_params = { + MGSL_MODE_HDLC, /* unsigned long mode */ + 0, /* unsigned char loopback; */ + HDLC_FLAG_UNDERRUN_ABORT15, /* unsigned short flags; */ + HDLC_ENCODING_NRZI_SPACE, /* unsigned char encoding; */ + 0, /* unsigned long clock_speed; */ + 0xff, /* unsigned char addr_filter; */ + HDLC_CRC_16_CCITT, /* unsigned short crc_type; */ + HDLC_PREAMBLE_LENGTH_8BITS, /* unsigned char preamble_length; */ + HDLC_PREAMBLE_PATTERN_NONE, /* unsigned char preamble; */ + 9600, /* unsigned long data_rate; */ + 8, /* unsigned char data_bits; */ + 1, /* unsigned char stop_bits; */ + ASYNC_PARITY_NONE /* unsigned char parity; */ +}; + +#define SHARED_MEM_ADDRESS_SIZE 0x40000 +#define BUFFERLISTSIZE (PAGE_SIZE) +#define DMABUFFERSIZE (PAGE_SIZE) +#define MAXDMABUFS ((SHARED_MEM_ADDRESS_SIZE-2*BUFFERLISTSIZE)/DMABUFFERSIZE) +#define MAXRXFRAMES 7 + +typedef struct _DMABUFFERENTRY +{ + u32 phys_addr; /* 32-bit flat physical address of data buffer */ + u16 count; /* buffer size/data count */ + u16 status; /* Control/status field */ + u16 rcc; /* character count field */ + u16 reserved; /* padding required by 16C32 */ + u32 link; /* 32-bit flat link to next buffer entry */ + char *virt_addr; /* virtual address of data buffer */ + u32 phys_entry; /* physical address of this buffer entry */ +} DMABUFFERENTRY, *DMAPBUFFERENTRY; + +/* The queue of BH actions to be performed */ + +#define BH_RECEIVE 1 +#define BH_TRANSMIT 2 +#define BH_STATUS 4 + +#define IO_PIN_SHUTDOWN_LIMIT 100 + +#define RELEVANT_IFLAG(iflag) (iflag & (IGNBRK|BRKINT|IGNPAR|PARMRK|INPCK)) + +struct _input_signal_events { + int ri_up; + int ri_down; + int dsr_up; + int dsr_down; + int dcd_up; + int dcd_down; + int cts_up; + int cts_down; +}; + +/* transmit holding buffer definitions*/ +#define MAX_TX_HOLDING_BUFFERS 5 +struct tx_holding_buffer { + int buffer_size; + unsigned char * buffer; +}; + + +/* + * Device instance data structure + */ + +struct mgsl_struct { + void *if_ptr; /* General purpose pointer (used by SPPP) */ + int magic; + int flags; + int count; /* count of opens */ + int line; + unsigned short close_delay; + unsigned short closing_wait; /* time to wait before closing */ + + struct mgsl_icount icount; + + struct termios normal_termios; + struct termios callout_termios; + + struct tty_struct *tty; + int timeout; + int x_char; /* xon/xoff character */ + int blocked_open; /* # of blocked opens */ + long session; /* Session of opening process */ + long pgrp; /* pgrp of opening process */ + u16 read_status_mask; + u16 ignore_status_mask; + unsigned char *xmit_buf; + int xmit_head; + int xmit_tail; + int xmit_cnt; + + wait_queue_head_t open_wait; + wait_queue_head_t close_wait; + + wait_queue_head_t status_event_wait_q; + wait_queue_head_t event_wait_q; + struct timer_list tx_timer; /* HDLC transmit timeout timer */ + struct mgsl_struct *next_device; /* device list link */ + + spinlock_t irq_spinlock; /* spinlock for synchronizing with ISR */ + struct tq_struct task; /* task structure for scheduling bh */ + + u32 EventMask; /* event trigger mask */ + u32 RecordedEvents; /* pending events */ + + u32 max_frame_size; /* as set by device config */ + + u32 pending_bh; + + int bh_running; /* Protection from multiple */ + int isr_overflow; + int bh_requested; + + int dcd_chkcount; /* check counts to prevent */ + int cts_chkcount; /* too many IRQs if a signal */ + int dsr_chkcount; /* is floating */ + int ri_chkcount; + + char 
*buffer_list; /* virtual address of Rx & Tx buffer lists */ + unsigned long buffer_list_phys; + + unsigned int rx_buffer_count; /* count of total allocated Rx buffers */ + DMABUFFERENTRY *rx_buffer_list; /* list of receive buffer entries */ + unsigned int current_rx_buffer; + + int num_tx_dma_buffers; /* number of tx dma frames required */ + int tx_dma_buffers_used; + unsigned int tx_buffer_count; /* count of total allocated Tx buffers */ + DMABUFFERENTRY *tx_buffer_list; /* list of transmit buffer entries */ + int start_tx_dma_buffer; /* tx dma buffer to start tx dma operation */ + int current_tx_buffer; /* next tx dma buffer to be loaded */ + + unsigned char *intermediate_rxbuffer; + + int num_tx_holding_buffers; /* number of tx holding buffer allocated */ + int get_tx_holding_index; /* next tx holding buffer for adapter to load */ + int put_tx_holding_index; /* next tx holding buffer to store user request */ + int tx_holding_count; /* number of tx holding buffers waiting */ + struct tx_holding_buffer tx_holding_buffers[MAX_TX_HOLDING_BUFFERS]; + + int rx_enabled; + int rx_overflow; + + int tx_enabled; + int tx_active; + u32 idle_mode; + + u16 cmr_value; + u16 tcsr_value; + + char device_name[25]; /* device instance name */ + + unsigned int bus_type; /* expansion bus type (ISA,EISA,PCI) */ + unsigned char bus; /* expansion bus number (zero based) */ + unsigned char function; /* PCI device number */ + + unsigned int io_base; /* base I/O address of adapter */ + unsigned int io_addr_size; /* size of the I/O address range */ + int io_addr_requested; /* nonzero if I/O address requested */ + + unsigned int irq_level; /* interrupt level */ + unsigned long irq_flags; + int irq_requested; /* nonzero if IRQ requested */ + + unsigned int dma_level; /* DMA channel */ + int dma_requested; /* nonzero if dma channel requested */ + + u16 mbre_bit; + u16 loopback_bits; + u16 usc_idle_mode; + + MGSL_PARAMS params; /* communications parameters */ + + unsigned char serial_signals; /* current serial signal states */ + + int irq_occurred; /* for diagnostics use */ + unsigned int init_error; /* Initialization startup error (DIAGS) */ + int fDiagnosticsmode; /* Driver in Diagnostic mode? (DIAGS) */ + + u32 last_mem_alloc; + unsigned char* memory_base; /* shared memory address (PCI only) */ + u32 phys_memory_base; + int shared_mem_requested; + + unsigned char* lcr_base; /* local config registers (PCI only) */ + u32 phys_lcr_base; + u32 lcr_offset; + int lcr_mem_requested; + + u32 misc_ctrl_value; + char flag_buf[MAX_ASYNC_BUFFER_SIZE]; + char char_buf[MAX_ASYNC_BUFFER_SIZE]; + BOOLEAN drop_rts_on_tx_done; + + BOOLEAN loopmode_insert_requested; + BOOLEAN loopmode_send_done_requested; + + struct _input_signal_events input_signal_events; + + /* SPPP/Cisco HDLC device parts */ + int netcount; + int dosyncppp; + spinlock_t netlock; +#ifdef CONFIG_SYNCLINK_SYNCPPP + struct ppp_device pppdev; + char netname[10]; + struct net_device *netdev; + struct net_device_stats netstats; + struct net_device netdevice; +#endif +}; + +#define MGSL_MAGIC 0x5401 + +/* + * The size of the serial xmit buffer is 1 page, or 4096 bytes + */ +#ifndef SERIAL_XMIT_SIZE +#define SERIAL_XMIT_SIZE 4096 +#endif + +/* + * These macros define the offsets used in calculating the + * I/O address of the specified USC registers. 
+ */ + + +#define DCPIN 2 /* Bit 1 of I/O address */ +#define SDPIN 4 /* Bit 2 of I/O address */ + +#define DCAR 0 /* DMA command/address register */ +#define CCAR SDPIN /* channel command/address register */ +#define DATAREG DCPIN + SDPIN /* serial data register */ +#define MSBONLY 0x41 +#define LSBONLY 0x40 + +/* + * These macros define the register address (ordinal number) + * used for writing address/value pairs to the USC. + */ + +#define CMR 0x02 /* Channel mode Register */ +#define CCSR 0x04 /* Channel Command/status Register */ +#define CCR 0x06 /* Channel Control Register */ +#define PSR 0x08 /* Port status Register */ +#define PCR 0x0a /* Port Control Register */ +#define TMDR 0x0c /* Test mode Data Register */ +#define TMCR 0x0e /* Test mode Control Register */ +#define CMCR 0x10 /* Clock mode Control Register */ +#define HCR 0x12 /* Hardware Configuration Register */ +#define IVR 0x14 /* Interrupt Vector Register */ +#define IOCR 0x16 /* Input/Output Control Register */ +#define ICR 0x18 /* Interrupt Control Register */ +#define DCCR 0x1a /* Daisy Chain Control Register */ +#define MISR 0x1c /* Misc Interrupt status Register */ +#define SICR 0x1e /* status Interrupt Control Register */ +#define RDR 0x20 /* Receive Data Register */ +#define RMR 0x22 /* Receive mode Register */ +#define RCSR 0x24 /* Receive Command/status Register */ +#define RICR 0x26 /* Receive Interrupt Control Register */ +#define RSR 0x28 /* Receive Sync Register */ +#define RCLR 0x2a /* Receive count Limit Register */ +#define RCCR 0x2c /* Receive Character count Register */ +#define TC0R 0x2e /* Time Constant 0 Register */ +#define TDR 0x30 /* Transmit Data Register */ +#define TMR 0x32 /* Transmit mode Register */ +#define TCSR 0x34 /* Transmit Command/status Register */ +#define TICR 0x36 /* Transmit Interrupt Control Register */ +#define TSR 0x38 /* Transmit Sync Register */ +#define TCLR 0x3a /* Transmit count Limit Register */ +#define TCCR 0x3c /* Transmit Character count Register */ +#define TC1R 0x3e /* Time Constant 1 Register */ + + +/* + * MACRO DEFINITIONS FOR DMA REGISTERS + */ + +#define DCR 0x06 /* DMA Control Register (shared) */ +#define DACR 0x08 /* DMA Array count Register (shared) */ +#define BDCR 0x12 /* Burst/Dwell Control Register (shared) */ +#define DIVR 0x14 /* DMA Interrupt Vector Register (shared) */ +#define DICR 0x18 /* DMA Interrupt Control Register (shared) */ +#define CDIR 0x1a /* Clear DMA Interrupt Register (shared) */ +#define SDIR 0x1c /* Set DMA Interrupt Register (shared) */ + +#define TDMR 0x02 /* Transmit DMA mode Register */ +#define TDIAR 0x1e /* Transmit DMA Interrupt Arm Register */ +#define TBCR 0x2a /* Transmit Byte count Register */ +#define TARL 0x2c /* Transmit Address Register (low) */ +#define TARU 0x2e /* Transmit Address Register (high) */ +#define NTBCR 0x3a /* Next Transmit Byte count Register */ +#define NTARL 0x3c /* Next Transmit Address Register (low) */ +#define NTARU 0x3e /* Next Transmit Address Register (high) */ + +#define RDMR 0x82 /* Receive DMA mode Register (non-shared) */ +#define RDIAR 0x9e /* Receive DMA Interrupt Arm Register */ +#define RBCR 0xaa /* Receive Byte count Register */ +#define RARL 0xac /* Receive Address Register (low) */ +#define RARU 0xae /* Receive Address Register (high) */ +#define NRBCR 0xba /* Next Receive Byte count Register */ +#define NRARL 0xbc /* Next Receive Address Register (low) */ +#define NRARU 0xbe /* Next Receive Address Register (high) */ + + +/* + * MACRO DEFINITIONS FOR MODEM STATUS BITS + */ + 
+#define MODEMSTATUS_DTR 0x80 +#define MODEMSTATUS_DSR 0x40 +#define MODEMSTATUS_RTS 0x20 +#define MODEMSTATUS_CTS 0x10 +#define MODEMSTATUS_RI 0x04 +#define MODEMSTATUS_DCD 0x01 + + +/* + * Channel Command/Address Register (CCAR) Command Codes + */ + +#define RTCmd_Null 0x0000 +#define RTCmd_ResetHighestIus 0x1000 +#define RTCmd_TriggerChannelLoadDma 0x2000 +#define RTCmd_TriggerRxDma 0x2800 +#define RTCmd_TriggerTxDma 0x3000 +#define RTCmd_TriggerRxAndTxDma 0x3800 +#define RTCmd_PurgeRxFifo 0x4800 +#define RTCmd_PurgeTxFifo 0x5000 +#define RTCmd_PurgeRxAndTxFifo 0x5800 +#define RTCmd_LoadRcc 0x6800 +#define RTCmd_LoadTcc 0x7000 +#define RTCmd_LoadRccAndTcc 0x7800 +#define RTCmd_LoadTC0 0x8800 +#define RTCmd_LoadTC1 0x9000 +#define RTCmd_LoadTC0AndTC1 0x9800 +#define RTCmd_SerialDataLSBFirst 0xa000 +#define RTCmd_SerialDataMSBFirst 0xa800 +#define RTCmd_SelectBigEndian 0xb000 +#define RTCmd_SelectLittleEndian 0xb800 + + +/* + * DMA Command/Address Register (DCAR) Command Codes + */ + +#define DmaCmd_Null 0x0000 +#define DmaCmd_ResetTxChannel 0x1000 +#define DmaCmd_ResetRxChannel 0x1200 +#define DmaCmd_StartTxChannel 0x2000 +#define DmaCmd_StartRxChannel 0x2200 +#define DmaCmd_ContinueTxChannel 0x3000 +#define DmaCmd_ContinueRxChannel 0x3200 +#define DmaCmd_PauseTxChannel 0x4000 +#define DmaCmd_PauseRxChannel 0x4200 +#define DmaCmd_AbortTxChannel 0x5000 +#define DmaCmd_AbortRxChannel 0x5200 +#define DmaCmd_InitTxChannel 0x7000 +#define DmaCmd_InitRxChannel 0x7200 +#define DmaCmd_ResetHighestDmaIus 0x8000 +#define DmaCmd_ResetAllChannels 0x9000 +#define DmaCmd_StartAllChannels 0xa000 +#define DmaCmd_ContinueAllChannels 0xb000 +#define DmaCmd_PauseAllChannels 0xc000 +#define DmaCmd_AbortAllChannels 0xd000 +#define DmaCmd_InitAllChannels 0xf000 + +#define TCmd_Null 0x0000 +#define TCmd_ClearTxCRC 0x2000 +#define TCmd_SelectTicrTtsaData 0x4000 +#define TCmd_SelectTicrTxFifostatus 0x5000 +#define TCmd_SelectTicrIntLevel 0x6000 +#define TCmd_SelectTicrdma_level 0x7000 +#define TCmd_SendFrame 0x8000 +#define TCmd_SendAbort 0x9000 +#define TCmd_EnableDleInsertion 0xc000 +#define TCmd_DisableDleInsertion 0xd000 +#define TCmd_ClearEofEom 0xe000 +#define TCmd_SetEofEom 0xf000 + +#define RCmd_Null 0x0000 +#define RCmd_ClearRxCRC 0x2000 +#define RCmd_EnterHuntmode 0x3000 +#define RCmd_SelectRicrRtsaData 0x4000 +#define RCmd_SelectRicrRxFifostatus 0x5000 +#define RCmd_SelectRicrIntLevel 0x6000 +#define RCmd_SelectRicrdma_level 0x7000 + +/* + * Bits for enabling and disabling IRQs in Interrupt Control Register (ICR) + */ + +#define RECEIVE_STATUS BIT5 +#define RECEIVE_DATA BIT4 +#define TRANSMIT_STATUS BIT3 +#define TRANSMIT_DATA BIT2 +#define IO_PIN BIT1 +#define MISC BIT0 + + +/* + * Receive status Bits in Receive Command/status Register RCSR + */ + +#define RXSTATUS_SHORT_FRAME BIT8 +#define RXSTATUS_CODE_VIOLATION BIT8 +#define RXSTATUS_EXITED_HUNT BIT7 +#define RXSTATUS_IDLE_RECEIVED BIT6 +#define RXSTATUS_BREAK_RECEIVED BIT5 +#define RXSTATUS_ABORT_RECEIVED BIT5 +#define RXSTATUS_RXBOUND BIT4 +#define RXSTATUS_CRC_ERROR BIT3 +#define RXSTATUS_FRAMING_ERROR BIT3 +#define RXSTATUS_ABORT BIT2 +#define RXSTATUS_PARITY_ERROR BIT2 +#define RXSTATUS_OVERRUN BIT1 +#define RXSTATUS_DATA_AVAILABLE BIT0 +#define RXSTATUS_ALL 0x01f6 +#define usc_UnlatchRxstatusBits(a,b) usc_OutReg( (a), RCSR, (u16)((b) & RXSTATUS_ALL) ) + +/* + * Values for setting transmit idle mode in + * Transmit Control/status Register (TCSR) + */ +#define IDLEMODE_FLAGS 0x0000 +#define IDLEMODE_ALT_ONE_ZERO 0x0100 +#define IDLEMODE_ZERO 
0x0200 +#define IDLEMODE_ONE 0x0300 +#define IDLEMODE_ALT_MARK_SPACE 0x0500 +#define IDLEMODE_SPACE 0x0600 +#define IDLEMODE_MARK 0x0700 +#define IDLEMODE_MASK 0x0700 + +/* + * IUSC revision identifiers + */ +#define IUSC_SL1660 0x4d44 +#define IUSC_PRE_SL1660 0x4553 + +/* + * Transmit status Bits in Transmit Command/status Register (TCSR) + */ + +#define TCSR_PRESERVE 0x0F00 + +#define TCSR_UNDERWAIT BIT11 +#define TXSTATUS_PREAMBLE_SENT BIT7 +#define TXSTATUS_IDLE_SENT BIT6 +#define TXSTATUS_ABORT_SENT BIT5 +#define TXSTATUS_EOF_SENT BIT4 +#define TXSTATUS_EOM_SENT BIT4 +#define TXSTATUS_CRC_SENT BIT3 +#define TXSTATUS_ALL_SENT BIT2 +#define TXSTATUS_UNDERRUN BIT1 +#define TXSTATUS_FIFO_EMPTY BIT0 +#define TXSTATUS_ALL 0x00fa +#define usc_UnlatchTxstatusBits(a,b) usc_OutReg( (a), TCSR, (u16)((a)->tcsr_value + ((b) & 0x00FF)) ) + + +#define MISCSTATUS_RXC_LATCHED BIT15 +#define MISCSTATUS_RXC BIT14 +#define MISCSTATUS_TXC_LATCHED BIT13 +#define MISCSTATUS_TXC BIT12 +#define MISCSTATUS_RI_LATCHED BIT11 +#define MISCSTATUS_RI BIT10 +#define MISCSTATUS_DSR_LATCHED BIT9 +#define MISCSTATUS_DSR BIT8 +#define MISCSTATUS_DCD_LATCHED BIT7 +#define MISCSTATUS_DCD BIT6 +#define MISCSTATUS_CTS_LATCHED BIT5 +#define MISCSTATUS_CTS BIT4 +#define MISCSTATUS_RCC_UNDERRUN BIT3 +#define MISCSTATUS_DPLL_NO_SYNC BIT2 +#define MISCSTATUS_BRG1_ZERO BIT1 +#define MISCSTATUS_BRG0_ZERO BIT0 + +#define usc_UnlatchIostatusBits(a,b) usc_OutReg((a),MISR,(u16)((b) & 0xaaa0)) +#define usc_UnlatchMiscstatusBits(a,b) usc_OutReg((a),MISR,(u16)((b) & 0x000f)) + +#define SICR_RXC_ACTIVE BIT15 +#define SICR_RXC_INACTIVE BIT14 +#define SICR_RXC (BIT15+BIT14) +#define SICR_TXC_ACTIVE BIT13 +#define SICR_TXC_INACTIVE BIT12 +#define SICR_TXC (BIT13+BIT12) +#define SICR_RI_ACTIVE BIT11 +#define SICR_RI_INACTIVE BIT10 +#define SICR_RI (BIT11+BIT10) +#define SICR_DSR_ACTIVE BIT9 +#define SICR_DSR_INACTIVE BIT8 +#define SICR_DSR (BIT9+BIT8) +#define SICR_DCD_ACTIVE BIT7 +#define SICR_DCD_INACTIVE BIT6 +#define SICR_DCD (BIT7+BIT6) +#define SICR_CTS_ACTIVE BIT5 +#define SICR_CTS_INACTIVE BIT4 +#define SICR_CTS (BIT5+BIT4) +#define SICR_RCC_UNDERFLOW BIT3 +#define SICR_DPLL_NO_SYNC BIT2 +#define SICR_BRG1_ZERO BIT1 +#define SICR_BRG0_ZERO BIT0 + +void usc_DisableMasterIrqBit( struct mgsl_struct *info ); +void usc_EnableMasterIrqBit( struct mgsl_struct *info ); +void usc_EnableInterrupts( struct mgsl_struct *info, u16 IrqMask ); +void usc_DisableInterrupts( struct mgsl_struct *info, u16 IrqMask ); +void usc_ClearIrqPendingBits( struct mgsl_struct *info, u16 IrqMask ); + +#define usc_EnableInterrupts( a, b ) \ + usc_OutReg( (a), ICR, (u16)((usc_InReg((a),ICR) & 0xff00) + 0xc0 + (b)) ) + +#define usc_DisableInterrupts( a, b ) \ + usc_OutReg( (a), ICR, (u16)((usc_InReg((a),ICR) & 0xff00) + 0x80 + (b)) ) + +#define usc_EnableMasterIrqBit(a) \ + usc_OutReg( (a), ICR, (u16)((usc_InReg((a),ICR) & 0x0f00) + 0xb000) ) + +#define usc_DisableMasterIrqBit(a) \ + usc_OutReg( (a), ICR, (u16)(usc_InReg((a),ICR) & 0x7f00) ) + +#define usc_ClearIrqPendingBits( a, b ) usc_OutReg( (a), DCCR, 0x40 + (b) ) + +/* + * Transmit status Bits in Transmit Control status Register (TCSR) + * and Transmit Interrupt Control Register (TICR) (except BIT2, BIT0) + */ + +#define TXSTATUS_PREAMBLE_SENT BIT7 +#define TXSTATUS_IDLE_SENT BIT6 +#define TXSTATUS_ABORT_SENT BIT5 +#define TXSTATUS_EOF BIT4 +#define TXSTATUS_CRC_SENT BIT3 +#define TXSTATUS_ALL_SENT BIT2 +#define TXSTATUS_UNDERRUN BIT1 +#define TXSTATUS_FIFO_EMPTY BIT0 + +#define DICR_MASTER BIT15 +#define 
DICR_TRANSMIT BIT0 +#define DICR_RECEIVE BIT1 + +#define usc_EnableDmaInterrupts(a,b) \ + usc_OutDmaReg( (a), DICR, (u16)(usc_InDmaReg((a),DICR) | (b)) ) + +#define usc_DisableDmaInterrupts(a,b) \ + usc_OutDmaReg( (a), DICR, (u16)(usc_InDmaReg((a),DICR) & ~(b)) ) + +#define usc_EnableStatusIrqs(a,b) \ + usc_OutReg( (a), SICR, (u16)(usc_InReg((a),SICR) | (b)) ) + +#define usc_DisablestatusIrqs(a,b) \ + usc_OutReg( (a), SICR, (u16)(usc_InReg((a),SICR) & ~(b)) ) + +/* Transmit status Bits in Transmit Control status Register (TCSR) */ +/* and Transmit Interrupt Control Register (TICR) (except BIT2, BIT0) */ + + +#define DISABLE_UNCONDITIONAL 0 +#define DISABLE_END_OF_FRAME 1 +#define ENABLE_UNCONDITIONAL 2 +#define ENABLE_AUTO_CTS 3 +#define ENABLE_AUTO_DCD 3 +#define usc_EnableTransmitter(a,b) \ + usc_OutReg( (a), TMR, (u16)((usc_InReg((a),TMR) & 0xfffc) | (b)) ) +#define usc_EnableReceiver(a,b) \ + usc_OutReg( (a), RMR, (u16)((usc_InReg((a),RMR) & 0xfffc) | (b)) ) + +u16 usc_InDmaReg( struct mgsl_struct *info, u16 Port ); +void usc_OutDmaReg( struct mgsl_struct *info, u16 Port, u16 Value ); +void usc_DmaCmd( struct mgsl_struct *info, u16 Cmd ); + +u16 usc_InReg( struct mgsl_struct *info, u16 Port ); +void usc_OutReg( struct mgsl_struct *info, u16 Port, u16 Value ); +void usc_RTCmd( struct mgsl_struct *info, u16 Cmd ); +void usc_RCmd( struct mgsl_struct *info, u16 Cmd ); +void usc_TCmd( struct mgsl_struct *info, u16 Cmd ); + +#define usc_TCmd(a,b) usc_OutReg((a), TCSR, (u16)((a)->tcsr_value + (b))) +#define usc_RCmd(a,b) usc_OutReg((a), RCSR, (b)) + +#define usc_SetTransmitSyncChars(a,s0,s1) usc_OutReg((a), TSR, (u16)(((u16)s0<<8)|(u16)s1)) + +void usc_process_rxoverrun_sync( struct mgsl_struct *info ); +void usc_start_receiver( struct mgsl_struct *info ); +void usc_stop_receiver( struct mgsl_struct *info ); + +void usc_start_transmitter( struct mgsl_struct *info ); +void usc_stop_transmitter( struct mgsl_struct *info ); +void usc_set_txidle( struct mgsl_struct *info ); +void usc_load_txfifo( struct mgsl_struct *info ); + +void usc_enable_aux_clock( struct mgsl_struct *info, u32 DataRate ); +void usc_enable_loopback( struct mgsl_struct *info, int enable ); + +void usc_get_serial_signals( struct mgsl_struct *info ); +void usc_set_serial_signals( struct mgsl_struct *info ); + +void usc_reset( struct mgsl_struct *info ); + +void usc_set_sync_mode( struct mgsl_struct *info ); +void usc_set_sdlc_mode( struct mgsl_struct *info ); +void usc_set_async_mode( struct mgsl_struct *info ); +void usc_enable_async_clock( struct mgsl_struct *info, u32 DataRate ); + +void usc_loopback_frame( struct mgsl_struct *info ); + +void mgsl_tx_timeout(unsigned long context); + + +void usc_loopmode_cancel_transmit( struct mgsl_struct * info ); +void usc_loopmode_insert_request( struct mgsl_struct * info ); +int usc_loopmode_active( struct mgsl_struct * info); +void usc_loopmode_send_done( struct mgsl_struct * info ); +int usc_loopmode_send_active( struct mgsl_struct * info ); + +int mgsl_ioctl_common(struct mgsl_struct *info, unsigned int cmd, unsigned long arg); + +#ifdef CONFIG_SYNCLINK_SYNCPPP +/* SPPP/HDLC stuff */ +void mgsl_sppp_init(struct mgsl_struct *info); +void mgsl_sppp_delete(struct mgsl_struct *info); +int mgsl_sppp_open(struct net_device *d); +int mgsl_sppp_close(struct net_device *d); +void mgsl_sppp_tx_timeout(struct net_device *d); +int mgsl_sppp_tx(struct sk_buff *skb, struct net_device *d); +void mgsl_sppp_rx_done(struct mgsl_struct *info, char *buf, int size); +void mgsl_sppp_tx_done(struct 
mgsl_struct *info); +int mgsl_sppp_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd); +struct net_device_stats *mgsl_net_stats(struct net_device *dev); +#endif + +/* + * Defines a BUS descriptor value for the PCI adapter + * local bus address ranges. + */ + +#define BUS_DESCRIPTOR( WrHold, WrDly, RdDly, Nwdd, Nwad, Nxda, Nrdd, Nrad ) \ +(0x00400020 + \ +((WrHold) << 30) + \ +((WrDly) << 28) + \ +((RdDly) << 26) + \ +((Nwdd) << 20) + \ +((Nwad) << 15) + \ +((Nxda) << 13) + \ +((Nrdd) << 11) + \ +((Nrad) << 6) ) + +void mgsl_trace_block(struct mgsl_struct *info,const char* data, int count, int xmit); + +/* + * Adapter diagnostic routines + */ +BOOLEAN mgsl_register_test( struct mgsl_struct *info ); +BOOLEAN mgsl_irq_test( struct mgsl_struct *info ); +BOOLEAN mgsl_dma_test( struct mgsl_struct *info ); +BOOLEAN mgsl_memory_test( struct mgsl_struct *info ); +int mgsl_adapter_test( struct mgsl_struct *info ); + +/* + * device and resource management routines + */ +int mgsl_claim_resources(struct mgsl_struct *info); +void mgsl_release_resources(struct mgsl_struct *info); +void mgsl_add_device(struct mgsl_struct *info); +struct mgsl_struct* mgsl_allocate_device(void); +int mgsl_enum_isa_devices(void); + +/* + * DMA buffer manupulation functions. + */ +void mgsl_free_rx_frame_buffers( struct mgsl_struct *info, unsigned int StartIndex, unsigned int EndIndex ); +int mgsl_get_rx_frame( struct mgsl_struct *info ); +int mgsl_get_raw_rx_frame( struct mgsl_struct *info ); +void mgsl_reset_rx_dma_buffers( struct mgsl_struct *info ); +void mgsl_reset_tx_dma_buffers( struct mgsl_struct *info ); +int num_free_tx_dma_buffers(struct mgsl_struct *info); +void mgsl_load_tx_dma_buffer( struct mgsl_struct *info, const char *Buffer, unsigned int BufferSize); +void mgsl_load_pci_memory(char* TargetPtr, const char* SourcePtr, unsigned short count); + +/* + * DMA and Shared Memory buffer allocation and formatting + */ +int mgsl_allocate_dma_buffers(struct mgsl_struct *info); +void mgsl_free_dma_buffers(struct mgsl_struct *info); +int mgsl_alloc_frame_memory(struct mgsl_struct *info, DMABUFFERENTRY *BufferList,int Buffercount); +void mgsl_free_frame_memory(struct mgsl_struct *info, DMABUFFERENTRY *BufferList,int Buffercount); +int mgsl_alloc_buffer_list_memory(struct mgsl_struct *info); +void mgsl_free_buffer_list_memory(struct mgsl_struct *info); +int mgsl_alloc_intermediate_rxbuffer_memory(struct mgsl_struct *info); +void mgsl_free_intermediate_rxbuffer_memory(struct mgsl_struct *info); +int mgsl_alloc_intermediate_txbuffer_memory(struct mgsl_struct *info); +void mgsl_free_intermediate_txbuffer_memory(struct mgsl_struct *info); +int load_next_tx_holding_buffer(struct mgsl_struct *info); +int save_tx_buffer_request(struct mgsl_struct *info,const char *Buffer, unsigned int BufferSize); + +/* + * Bottom half interrupt handlers + */ +void mgsl_bh_handler(void* Context); +void mgsl_bh_receive(struct mgsl_struct *info); +void mgsl_bh_transmit(struct mgsl_struct *info); +void mgsl_bh_status(struct mgsl_struct *info); + +/* + * Interrupt handler routines and dispatch table. 
+ */ +void mgsl_isr_null( struct mgsl_struct *info ); +void mgsl_isr_transmit_data( struct mgsl_struct *info ); +void mgsl_isr_receive_data( struct mgsl_struct *info ); +void mgsl_isr_receive_status( struct mgsl_struct *info ); +void mgsl_isr_transmit_status( struct mgsl_struct *info ); +void mgsl_isr_io_pin( struct mgsl_struct *info ); +void mgsl_isr_misc( struct mgsl_struct *info ); +void mgsl_isr_receive_dma( struct mgsl_struct *info ); +void mgsl_isr_transmit_dma( struct mgsl_struct *info ); + +typedef void (*isr_dispatch_func)(struct mgsl_struct *); + +isr_dispatch_func UscIsrTable[7] = +{ + mgsl_isr_null, + mgsl_isr_misc, + mgsl_isr_io_pin, + mgsl_isr_transmit_data, + mgsl_isr_transmit_status, + mgsl_isr_receive_data, + mgsl_isr_receive_status +}; + +/* + * ioctl call handlers + */ +static int set_modem_info(struct mgsl_struct * info, unsigned int cmd, + unsigned int *value); +static int get_modem_info(struct mgsl_struct * info, unsigned int *value); +static int mgsl_get_stats(struct mgsl_struct * info, struct mgsl_icount + *user_icount); +static int mgsl_get_params(struct mgsl_struct * info, MGSL_PARAMS *user_params); +static int mgsl_set_params(struct mgsl_struct * info, MGSL_PARAMS *new_params); +static int mgsl_get_txidle(struct mgsl_struct * info, int*idle_mode); +static int mgsl_set_txidle(struct mgsl_struct * info, int idle_mode); +static int mgsl_txenable(struct mgsl_struct * info, int enable); +static int mgsl_txabort(struct mgsl_struct * info); +static int mgsl_rxenable(struct mgsl_struct * info, int enable); +static int mgsl_wait_event(struct mgsl_struct * info, int * mask); +static int mgsl_loopmode_send_done( struct mgsl_struct * info ); + +#define jiffies_from_ms(a) ((((a) * HZ)/1000)+1) + +/* + * Global linked list of SyncLink devices + */ +struct mgsl_struct *mgsl_device_list; +int mgsl_device_count; + +/* + * Set this param to non-zero to load eax with the + * .text section address and breakpoint on module load. + * This is useful for use with gdb and add-symbol-file command. + */ +int break_on_load; + +/* + * Driver major number, defaults to zero to get auto + * assigned major number. May be forced as module parameter. + */ +int ttymajor; + +int cuamajor; + +/* + * Array of user specified options for ISA adapters. 
+ */ +static int io[MAX_ISA_DEVICES]; +static int irq[MAX_ISA_DEVICES]; +static int dma[MAX_ISA_DEVICES]; +static int debug_level; +static int maxframe[MAX_TOTAL_DEVICES]; +static int dosyncppp[MAX_TOTAL_DEVICES]; +static int txdmabufs[MAX_TOTAL_DEVICES]; +static int txholdbufs[MAX_TOTAL_DEVICES]; + +MODULE_PARM(break_on_load,"i"); +MODULE_PARM(ttymajor,"i"); +MODULE_PARM(cuamajor,"i"); +MODULE_PARM(io,"1-" __MODULE_STRING(MAX_ISA_DEVICES) "i"); +MODULE_PARM(irq,"1-" __MODULE_STRING(MAX_ISA_DEVICES) "i"); +MODULE_PARM(dma,"1-" __MODULE_STRING(MAX_ISA_DEVICES) "i"); +MODULE_PARM(debug_level,"i"); +MODULE_PARM(maxframe,"1-" __MODULE_STRING(MAX_TOTAL_DEVICES) "i"); +MODULE_PARM(dosyncppp,"1-" __MODULE_STRING(MAX_TOTAL_DEVICES) "i"); +MODULE_PARM(txdmabufs,"1-" __MODULE_STRING(MAX_TOTAL_DEVICES) "i"); +MODULE_PARM(txholdbufs,"1-" __MODULE_STRING(MAX_TOTAL_DEVICES) "i"); + +static char *driver_name = "SyncLink serial driver"; +static char *driver_version = "$Revision: 3.12 $"; + +static int __init synclink_init_one (struct pci_dev *dev, + const struct pci_device_id *ent); +static void __exit synclink_remove_one (struct pci_dev *dev); + +static struct pci_device_id synclink_pci_tbl[] __devinitdata = { + { PCI_VENDOR_ID_MICROGATE, PCI_DEVICE_ID_MICROGATE_USC, PCI_ANY_ID, PCI_ANY_ID, }, + { 0, }, /* terminate list */ +}; +MODULE_DEVICE_TABLE(pci, synclink_pci_tbl); + +static struct pci_driver synclink_pci_driver = { + name: "synclink", + id_table: synclink_pci_tbl, + probe: synclink_init_one, + remove: synclink_remove_one, +}; + +static struct tty_driver serial_driver, callout_driver; +static int serial_refcount; + +/* number of characters left in xmit buffer before we ask for more */ +#define WAKEUP_CHARS 256 + + +static void mgsl_change_params(struct mgsl_struct *info); +static void mgsl_wait_until_sent(struct tty_struct *tty, int timeout); + +static struct tty_struct *serial_table[MAX_TOTAL_DEVICES]; +static struct termios *serial_termios[MAX_TOTAL_DEVICES]; +static struct termios *serial_termios_locked[MAX_TOTAL_DEVICES]; + +#ifndef MIN +#define MIN(a,b) ((a) < (b) ? (a) : (b)) +#endif + +/* + * 1st function defined in .text section. Calling this function in + * init_module() followed by a breakpoint allows a remote debugger + * (gdb) to get the .text address for the add-symbol-file command. + * This allows remote debugging of dynamically loadable modules. + */ +void* mgsl_get_text_ptr(void); +void* mgsl_get_text_ptr() {return mgsl_get_text_ptr;} + +/* + * tmp_buf is used as a temporary buffer by mgsl_write. We need to + * lock it in case the COPY_FROM_USER blocks while swapping in a page, + * and some other program tries to do a serial write at the same time. + * Since the lock will only come under contention when the system is + * swapping and available memory is low, it makes sense to share one + * buffer across all the serial ioports, since it significantly saves + * memory if large numbers of serial ports are open. 
+ */ +static unsigned char *tmp_buf; +static DECLARE_MUTEX(tmp_buf_sem); + +static inline int mgsl_paranoia_check(struct mgsl_struct *info, + kdev_t device, const char *routine) +{ +#ifdef MGSL_PARANOIA_CHECK + static const char *badmagic = + "Warning: bad magic number for mgsl struct (%s) in %s\n"; + static const char *badinfo = + "Warning: null mgsl_struct for (%s) in %s\n"; + + if (!info) { + printk(badinfo, kdevname(device), routine); + return 1; + } + if (info->magic != MGSL_MAGIC) { + printk(badmagic, kdevname(device), routine); + return 1; + } +#endif + return 0; +} + +/* mgsl_stop() throttle (stop) transmitter + * + * Arguments: tty pointer to tty info structure + * Return Value: None + */ +static void mgsl_stop(struct tty_struct *tty) +{ + struct mgsl_struct *info = (struct mgsl_struct *)tty->driver_data; + unsigned long flags; + + if (mgsl_paranoia_check(info, tty->device, "mgsl_stop")) + return; + + if ( debug_level >= DEBUG_LEVEL_INFO ) + printk("mgsl_stop(%s)\n",info->device_name); + + spin_lock_irqsave(&info->irq_spinlock,flags); + if (info->tx_enabled) + usc_stop_transmitter(info); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + +} /* end of mgsl_stop() */ + +/* mgsl_start() release (start) transmitter + * + * Arguments: tty pointer to tty info structure + * Return Value: None + */ +static void mgsl_start(struct tty_struct *tty) +{ + struct mgsl_struct *info = (struct mgsl_struct *)tty->driver_data; + unsigned long flags; + + if (mgsl_paranoia_check(info, tty->device, "mgsl_start")) + return; + + if ( debug_level >= DEBUG_LEVEL_INFO ) + printk("mgsl_start(%s)\n",info->device_name); + + spin_lock_irqsave(&info->irq_spinlock,flags); + if (!info->tx_enabled) + usc_start_transmitter(info); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + +} /* end of mgsl_start() */ + +/* + * Bottom half work queue access functions + */ + +/* mgsl_bh_action() Return next bottom half action to perform. + * Return Value: BH action code or 0 if nothing to do. + */ +int mgsl_bh_action(struct mgsl_struct *info) +{ + unsigned long flags; + int rc = 0; + + spin_lock_irqsave(&info->irq_spinlock,flags); + + if (info->pending_bh & BH_RECEIVE) { + info->pending_bh &= ~BH_RECEIVE; + rc = BH_RECEIVE; + } else if (info->pending_bh & BH_TRANSMIT) { + info->pending_bh &= ~BH_TRANSMIT; + rc = BH_TRANSMIT; + } else if (info->pending_bh & BH_STATUS) { + info->pending_bh &= ~BH_STATUS; + rc = BH_STATUS; + } + + if (!rc) { + /* Mark BH routine as complete */ + info->bh_running = 0; + info->bh_requested = 0; + } + + spin_unlock_irqrestore(&info->irq_spinlock,flags); + + return rc; +} + +/* + * Perform bottom half processing of work items queued by ISR. 
+ */ +void mgsl_bh_handler(void* Context) +{ + struct mgsl_struct *info = (struct mgsl_struct*)Context; + int action; + + if (!info) + return; + + if ( debug_level >= DEBUG_LEVEL_BH ) + printk( "%s(%d):mgsl_bh_handler(%s) entry\n", + __FILE__,__LINE__,info->device_name); + + info->bh_running = 1; + + while((action = mgsl_bh_action(info)) != 0) { + + /* Process work item */ + if ( debug_level >= DEBUG_LEVEL_BH ) + printk( "%s(%d):mgsl_bh_handler() work item action=%d\n", + __FILE__,__LINE__,action); + + switch (action) { + + case BH_RECEIVE: + mgsl_bh_receive(info); + break; + case BH_TRANSMIT: + mgsl_bh_transmit(info); + break; + case BH_STATUS: + mgsl_bh_status(info); + break; + default: + /* unknown work item ID */ + printk("Unknown work item ID=%08X!\n", action); + break; + } + } + + if ( debug_level >= DEBUG_LEVEL_BH ) + printk( "%s(%d):mgsl_bh_handler(%s) exit\n", + __FILE__,__LINE__,info->device_name); +} + +void mgsl_bh_receive(struct mgsl_struct *info) +{ + int (*get_rx_frame)(struct mgsl_struct *info) = + (info->params.mode == MGSL_MODE_HDLC ? mgsl_get_rx_frame : mgsl_get_raw_rx_frame); + + if ( debug_level >= DEBUG_LEVEL_BH ) + printk( "%s(%d):mgsl_bh_receive(%s)\n", + __FILE__,__LINE__,info->device_name); + + while( (get_rx_frame)(info) ); +} + +void mgsl_bh_transmit(struct mgsl_struct *info) +{ + struct tty_struct *tty = info->tty; + unsigned long flags; + + if ( debug_level >= DEBUG_LEVEL_BH ) + printk( "%s(%d):mgsl_bh_transmit() entry on %s\n", + __FILE__,__LINE__,info->device_name); + + if (tty) { + if ((tty->flags & (1 << TTY_DO_WRITE_WAKEUP)) && + tty->ldisc.write_wakeup) { + if ( debug_level >= DEBUG_LEVEL_BH ) + printk( "%s(%d):calling ldisc.write_wakeup on %s\n", + __FILE__,__LINE__,info->device_name); + (tty->ldisc.write_wakeup)(tty); + } + wake_up_interruptible(&tty->write_wait); + } + + /* if transmitter idle and loopmode_send_done_requested + * then start echoing RxD to TxD + */ + spin_lock_irqsave(&info->irq_spinlock,flags); + if ( !info->tx_active && info->loopmode_send_done_requested ) + usc_loopmode_send_done( info ); + spin_unlock_irqrestore(&info->irq_spinlock,flags); +} + +void mgsl_bh_status(struct mgsl_struct *info) +{ + if ( debug_level >= DEBUG_LEVEL_BH ) + printk( "%s(%d):mgsl_bh_status() entry on %s\n", + __FILE__,__LINE__,info->device_name); + + info->ri_chkcount = 0; + info->dsr_chkcount = 0; + info->dcd_chkcount = 0; + info->cts_chkcount = 0; +} + +/* mgsl_isr_receive_status() + * + * Service a receive status interrupt. The type of status + * interrupt is indicated by the state of the RCSR. + * This is only used for HDLC mode. 
+ * + * Arguments: info pointer to device instance data + * Return Value: None + */ +void mgsl_isr_receive_status( struct mgsl_struct *info ) +{ + u16 status = usc_InReg( info, RCSR ); + + if ( debug_level >= DEBUG_LEVEL_ISR ) + printk("%s(%d):mgsl_isr_receive_status status=%04X\n", + __FILE__,__LINE__,status); + + if ( (status & RXSTATUS_ABORT_RECEIVED) && + info->loopmode_insert_requested && + usc_loopmode_active(info) ) + { + ++info->icount.rxabort; + info->loopmode_insert_requested = FALSE; + + /* clear CMR:13 to start echoing RxD to TxD */ + info->cmr_value &= ~BIT13; + usc_OutReg(info, CMR, info->cmr_value); + + /* disable received abort irq (no longer required) */ + usc_OutReg(info, RICR, + (usc_InReg(info, RICR) & ~RXSTATUS_ABORT_RECEIVED)); + } + + if (status & (RXSTATUS_EXITED_HUNT + RXSTATUS_IDLE_RECEIVED)) { + if (status & RXSTATUS_EXITED_HUNT) + info->icount.exithunt++; + if (status & RXSTATUS_IDLE_RECEIVED) + info->icount.rxidle++; + wake_up_interruptible(&info->event_wait_q); + } + + if (status & RXSTATUS_OVERRUN){ + info->icount.rxover++; + usc_process_rxoverrun_sync( info ); + } + + usc_ClearIrqPendingBits( info, RECEIVE_STATUS ); + usc_UnlatchRxstatusBits( info, status ); + +} /* end of mgsl_isr_receive_status() */ + +/* mgsl_isr_transmit_status() + * + * Service a transmit status interrupt + * HDLC mode :end of transmit frame + * Async mode:all data is sent + * transmit status is indicated by bits in the TCSR. + * + * Arguments: info pointer to device instance data + * Return Value: None + */ +void mgsl_isr_transmit_status( struct mgsl_struct *info ) +{ + u16 status = usc_InReg( info, TCSR ); + + if ( debug_level >= DEBUG_LEVEL_ISR ) + printk("%s(%d):mgsl_isr_transmit_status status=%04X\n", + __FILE__,__LINE__,status); + + usc_ClearIrqPendingBits( info, TRANSMIT_STATUS ); + usc_UnlatchTxstatusBits( info, status ); + + if ( status & (TXSTATUS_UNDERRUN | TXSTATUS_ABORT_SENT) ) + { + /* finished sending HDLC abort. This may leave */ + /* the TxFifo with data from the aborted frame */ + /* so purge the TxFifo. Also shutdown the DMA */ + /* channel in case there is data remaining in */ + /* the DMA buffer */ + usc_DmaCmd( info, DmaCmd_ResetTxChannel ); + usc_RTCmd( info, RTCmd_PurgeTxFifo ); + } + + if ( status & TXSTATUS_EOF_SENT ) + info->icount.txok++; + else if ( status & TXSTATUS_UNDERRUN ) + info->icount.txunder++; + else if ( status & TXSTATUS_ABORT_SENT ) + info->icount.txabort++; + else + info->icount.txunder++; + + info->tx_active = 0; + info->xmit_cnt = info->xmit_head = info->xmit_tail = 0; + del_timer(&info->tx_timer); + + if ( info->drop_rts_on_tx_done ) { + usc_get_serial_signals( info ); + if ( info->serial_signals & SerialSignal_RTS ) { + info->serial_signals &= ~SerialSignal_RTS; + usc_set_serial_signals( info ); + } + info->drop_rts_on_tx_done = 0; + } + +#ifdef CONFIG_SYNCLINK_SYNCPPP + if (info->netcount) + mgsl_sppp_tx_done(info); + else +#endif + { + if (info->tty->stopped || info->tty->hw_stopped) { + usc_stop_transmitter(info); + return; + } + info->pending_bh |= BH_TRANSMIT; + } + +} /* end of mgsl_isr_transmit_status() */ + +/* mgsl_isr_io_pin() + * + * Service an Input/Output pin interrupt. 
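+ *
+ * Note that each latched modem-line change handled here also bumps
+ * a per-signal check counter, e.g.
+ *
+ *	if ((info->ri_chkcount)++ >= IO_PIN_SHUTDOWN_LIMIT)
+ *		usc_DisablestatusIrqs(info,SICR_RI);
+ *
+ * so a bouncing input cannot generate an interrupt storm; the
+ * counters are cleared again in mgsl_bh_status().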
The type of + * interrupt is indicated by bits in the MISR + * + * Arguments: info pointer to device instance data + * Return Value: None + */ +void mgsl_isr_io_pin( struct mgsl_struct *info ) +{ + struct mgsl_icount *icount; + u16 status = usc_InReg( info, MISR ); + + if ( debug_level >= DEBUG_LEVEL_ISR ) + printk("%s(%d):mgsl_isr_io_pin status=%04X\n", + __FILE__,__LINE__,status); + + usc_ClearIrqPendingBits( info, IO_PIN ); + usc_UnlatchIostatusBits( info, status ); + + if (status & (MISCSTATUS_CTS_LATCHED | MISCSTATUS_DCD_LATCHED | + MISCSTATUS_DSR_LATCHED | MISCSTATUS_RI_LATCHED) ) { + icount = &info->icount; + /* update input line counters */ + if (status & MISCSTATUS_RI_LATCHED) { + if ((info->ri_chkcount)++ >= IO_PIN_SHUTDOWN_LIMIT) + usc_DisablestatusIrqs(info,SICR_RI); + icount->rng++; + if ( status & MISCSTATUS_RI ) + info->input_signal_events.ri_up++; + else + info->input_signal_events.ri_down++; + } + if (status & MISCSTATUS_DSR_LATCHED) { + if ((info->dsr_chkcount)++ >= IO_PIN_SHUTDOWN_LIMIT) + usc_DisablestatusIrqs(info,SICR_DSR); + icount->dsr++; + if ( status & MISCSTATUS_DSR ) + info->input_signal_events.dsr_up++; + else + info->input_signal_events.dsr_down++; + } + if (status & MISCSTATUS_DCD_LATCHED) { + if ((info->dcd_chkcount)++ >= IO_PIN_SHUTDOWN_LIMIT) + usc_DisablestatusIrqs(info,SICR_DCD); + icount->dcd++; + if (status & MISCSTATUS_DCD) { + info->input_signal_events.dcd_up++; +#ifdef CONFIG_SYNCLINK_SYNCPPP + if (info->netcount) + sppp_reopen(info->netdev); +#endif + } else + info->input_signal_events.dcd_down++; + } + if (status & MISCSTATUS_CTS_LATCHED) + { + if ((info->cts_chkcount)++ >= IO_PIN_SHUTDOWN_LIMIT) + usc_DisablestatusIrqs(info,SICR_CTS); + icount->cts++; + if ( status & MISCSTATUS_CTS ) + info->input_signal_events.cts_up++; + else + info->input_signal_events.cts_down++; + } + wake_up_interruptible(&info->status_event_wait_q); + wake_up_interruptible(&info->event_wait_q); + + if ( (info->flags & ASYNC_CHECK_CD) && + (status & MISCSTATUS_DCD_LATCHED) ) { + if ( debug_level >= DEBUG_LEVEL_ISR ) + printk("%s CD now %s...", info->device_name, + (status & MISCSTATUS_DCD) ? "on" : "off"); + if (status & MISCSTATUS_DCD) + wake_up_interruptible(&info->open_wait); + else if (!((info->flags & ASYNC_CALLOUT_ACTIVE) && + (info->flags & ASYNC_CALLOUT_NOHUP))) { + if ( debug_level >= DEBUG_LEVEL_ISR ) + printk("doing serial hangup..."); + if (info->tty) + tty_hangup(info->tty); + } + } + + if ( (info->flags & ASYNC_CTS_FLOW) && + (status & MISCSTATUS_CTS_LATCHED) ) { + if (info->tty->hw_stopped) { + if (status & MISCSTATUS_CTS) { + if ( debug_level >= DEBUG_LEVEL_ISR ) + printk("CTS tx start..."); + if (info->tty) + info->tty->hw_stopped = 0; + usc_start_transmitter(info); + info->pending_bh |= BH_TRANSMIT; + return; + } + } else { + if (!(status & MISCSTATUS_CTS)) { + if ( debug_level >= DEBUG_LEVEL_ISR ) + printk("CTS tx stop..."); + if (info->tty) + info->tty->hw_stopped = 1; + usc_stop_transmitter(info); + } + } + } + } + + info->pending_bh |= BH_STATUS; + + /* for diagnostics set IRQ flag */ + if ( status & MISCSTATUS_TXC_LATCHED ){ + usc_OutReg( info, SICR, + (unsigned short)(usc_InReg(info,SICR) & ~(SICR_TXC_ACTIVE+SICR_TXC_INACTIVE)) ); + usc_UnlatchIostatusBits( info, MISCSTATUS_TXC_LATCHED ); + info->irq_occurred = 1; + } + +} /* end of mgsl_isr_io_pin() */ + +/* mgsl_isr_transmit_data() + * + * Service a transmit data interrupt (async mode only). 
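+ *
+ * Once the circular buffer drains below WAKEUP_CHARS (256), the
+ * handler requests bottom-half work so the line discipline can be
+ * woken to refill it:
+ *
+ *	if (info->xmit_cnt < WAKEUP_CHARS)
+ *		info->pending_bh |= BH_TRANSMIT;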
+ * + * Arguments: info pointer to device instance data + * Return Value: None + */ +void mgsl_isr_transmit_data( struct mgsl_struct *info ) +{ + if ( debug_level >= DEBUG_LEVEL_ISR ) + printk("%s(%d):mgsl_isr_transmit_data xmit_cnt=%d\n", + __FILE__,__LINE__,info->xmit_cnt); + + usc_ClearIrqPendingBits( info, TRANSMIT_DATA ); + + if (info->tty->stopped || info->tty->hw_stopped) { + usc_stop_transmitter(info); + return; + } + + if ( info->xmit_cnt ) + usc_load_txfifo( info ); + else + info->tx_active = 0; + + if (info->xmit_cnt < WAKEUP_CHARS) + info->pending_bh |= BH_TRANSMIT; + +} /* end of mgsl_isr_transmit_data() */ + +/* mgsl_isr_receive_data() + * + * Service a receive data interrupt. This occurs + * when operating in asynchronous interrupt transfer mode. + * The receive data FIFO is flushed to the receive data buffers. + * + * Arguments: info pointer to device instance data + * Return Value: None + */ +void mgsl_isr_receive_data( struct mgsl_struct *info ) +{ + int Fifocount; + u16 status; + unsigned char DataByte; + struct tty_struct *tty = info->tty; + struct mgsl_icount *icount = &info->icount; + + if ( debug_level >= DEBUG_LEVEL_ISR ) + printk("%s(%d):mgsl_isr_receive_data\n", + __FILE__,__LINE__); + + usc_ClearIrqPendingBits( info, RECEIVE_DATA ); + + /* select FIFO status for RICR readback */ + usc_RCmd( info, RCmd_SelectRicrRxFifostatus ); + + /* clear the Wordstatus bit so that status readback */ + /* only reflects the status of this byte */ + usc_OutReg( info, RICR+LSBONLY, (u16)(usc_InReg(info, RICR+LSBONLY) & ~BIT3 )); + + /* flush the receive FIFO */ + + while( (Fifocount = (usc_InReg(info,RICR) >> 8)) ) { + /* read one byte from RxFIFO */ + outw( (inw(info->io_base + CCAR) & 0x0780) | (RDR+LSBONLY), + info->io_base + CCAR ); + DataByte = inb( info->io_base + CCAR ); + + /* get the status of the received byte */ + status = usc_InReg(info, RCSR); + if ( status & (RXSTATUS_FRAMING_ERROR + RXSTATUS_PARITY_ERROR + + RXSTATUS_OVERRUN + RXSTATUS_BREAK_RECEIVED) ) + usc_UnlatchRxstatusBits(info,RXSTATUS_ALL); + + if (tty->flip.count >= TTY_FLIPBUF_SIZE) + continue; + + *tty->flip.char_buf_ptr = DataByte; + icount->rx++; + + *tty->flip.flag_buf_ptr = 0; + if ( status & (RXSTATUS_FRAMING_ERROR + RXSTATUS_PARITY_ERROR + + RXSTATUS_OVERRUN + RXSTATUS_BREAK_RECEIVED) ) { + printk("rxerr=%04X\n",status); + /* update error statistics */ + if ( status & RXSTATUS_BREAK_RECEIVED ) { + status &= ~(RXSTATUS_FRAMING_ERROR + RXSTATUS_PARITY_ERROR); + icount->brk++; + } else if (status & RXSTATUS_PARITY_ERROR) + icount->parity++; + else if (status & RXSTATUS_FRAMING_ERROR) + icount->frame++; + else if (status & RXSTATUS_OVERRUN) { + /* must issue purge fifo cmd before */ + /* 16C32 accepts more receive chars */ + usc_RTCmd(info,RTCmd_PurgeRxFifo); + icount->overrun++; + } + + /* discard char if tty control flags say so */ + if (status & info->ignore_status_mask) + continue; + + status &= info->read_status_mask; + + if (status & RXSTATUS_BREAK_RECEIVED) { + *tty->flip.flag_buf_ptr = TTY_BREAK; + if (info->flags & ASYNC_SAK) + do_SAK(tty); + } else if (status & RXSTATUS_PARITY_ERROR) + *tty->flip.flag_buf_ptr = TTY_PARITY; + else if (status & RXSTATUS_FRAMING_ERROR) + *tty->flip.flag_buf_ptr = TTY_FRAME; + if (status & RXSTATUS_OVERRUN) { + /* Overrun is special, since it's + * reported immediately, and doesn't + * affect the current character + */ + if (tty->flip.count < TTY_FLIPBUF_SIZE) { + tty->flip.count++; + tty->flip.flag_buf_ptr++; + tty->flip.char_buf_ptr++; + *tty->flip.flag_buf_ptr = 
TTY_OVERRUN; + } + } + } /* end of if (error) */ + + tty->flip.flag_buf_ptr++; + tty->flip.char_buf_ptr++; + tty->flip.count++; + } + + if ( debug_level >= DEBUG_LEVEL_ISR ) { + printk("%s(%d):mgsl_isr_receive_data flip count=%d\n", + __FILE__,__LINE__,tty->flip.count); + printk("%s(%d):rx=%d brk=%d parity=%d frame=%d overrun=%d\n", + __FILE__,__LINE__,icount->rx,icount->brk, + icount->parity,icount->frame,icount->overrun); + } + + if ( tty->flip.count ) + tty_flip_buffer_push(tty); +} + +/* mgsl_isr_misc() + * + * Service a miscellaneous interrupt source. + * + * Arguments: info pointer to device extension (instance data) + * Return Value: None + */ +void mgsl_isr_misc( struct mgsl_struct *info ) +{ + u16 status = usc_InReg( info, MISR ); + + if ( debug_level >= DEBUG_LEVEL_ISR ) + printk("%s(%d):mgsl_isr_misc status=%04X\n", + __FILE__,__LINE__,status); + + usc_ClearIrqPendingBits( info, MISC ); + usc_UnlatchMiscstatusBits( info, status ); + +} /* end of mgsl_isr_misc() */ + +/* mgsl_isr_null() + * + * Services undefined interrupt vectors from the + * USC. (hence this function SHOULD never be called) + * + * Arguments: info pointer to device extension (instance data) + * Return Value: None + */ +void mgsl_isr_null( struct mgsl_struct *info ) +{ + +} /* end of mgsl_isr_null() */ + +/* mgsl_isr_receive_dma() + * + * Service a receive DMA channel interrupt. + * For this driver there are two sources of receive DMA interrupts + * as identified in the Receive DMA mode Register (RDMR): + * + * BIT3 EOA/EOL End of List, all receive buffers in receive + * buffer list have been filled (no more free buffers + * available). The DMA controller has shut down. + * + * BIT2 EOB End of Buffer. This interrupt occurs when a receive + * DMA buffer is terminated in response to completion + * of a good frame or a frame with errors. The status + * of the frame is stored in the buffer entry in the + * list of receive buffer entries. + * + * Arguments: info pointer to device instance data + * Return Value: None + */ +void mgsl_isr_receive_dma( struct mgsl_struct *info ) +{ + u16 status; + + /* clear interrupt pending and IUS bit for Rx DMA IRQ */ + usc_OutDmaReg( info, CDIR, BIT9+BIT1 ); + + /* Read the receive DMA status to identify interrupt type. */ + /* This also clears the status bits. */ + status = usc_InDmaReg( info, RDMR ); + + if ( debug_level >= DEBUG_LEVEL_ISR ) + printk("%s(%d):mgsl_isr_receive_dma(%s) status=%04X\n", + __FILE__,__LINE__,info->device_name,status); + + info->pending_bh |= BH_RECEIVE; + + if ( status & BIT3 ) { + info->rx_overflow = 1; + info->icount.buf_overrun++; + } + +} /* end of mgsl_isr_receive_dma() */ + +/* mgsl_isr_transmit_dma() + * + * This function services a transmit DMA channel interrupt. + * + * For this driver there is one source of transmit DMA interrupts + * as identified in the Transmit DMA Mode Register (TDMR): + * + * BIT2 EOB End of Buffer. This interrupt occurs when a + * transmit DMA buffer has been emptied. + * + * The driver maintains enough transmit DMA buffers to hold at least + * one max frame size transmit frame. When operating in a buffered + * transmit mode, there may be enough transmit DMA buffers to hold at + * least two or more max frame size frames. On an EOB condition, + * determine if there are any queued transmit buffers and copy into + * transmit DMA buffers if we have room.
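+ *
+ * The EOB handling below in outline:
+ *
+ *	if ( status & BIT2 ) {
+ *		--info->tx_dma_buffers_used;
+ *		if ( load_next_tx_holding_buffer(info) )
+ *			info->pending_bh |= BH_TRANSMIT;
+ *	}
+ *
+ * i.e. a freed DMA buffer is immediately refilled from any queued
+ * holding buffer, and the bottom half then wakes writers.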
+ * + * Arguments: info pointer to device instance data + * Return Value: None + */ +void mgsl_isr_transmit_dma( struct mgsl_struct *info ) +{ + u16 status; + + /* clear interrupt pending and IUS bit for Tx DMA IRQ */ + usc_OutDmaReg(info, CDIR, BIT8+BIT0 ); + + /* Read the transmit DMA status to identify interrupt type. */ + /* This also clears the status bits. */ + + status = usc_InDmaReg( info, TDMR ); + + if ( debug_level >= DEBUG_LEVEL_ISR ) + printk("%s(%d):mgsl_isr_transmit_dma(%s) status=%04X\n", + __FILE__,__LINE__,info->device_name,status); + + if ( status & BIT2 ) { + --info->tx_dma_buffers_used; + + /* if there are transmit frames queued, + * try to load the next one + */ + if ( load_next_tx_holding_buffer(info) ) { + /* if call returns non-zero value, we have + * at least one free tx holding buffer + */ + info->pending_bh |= BH_TRANSMIT; + } + } + +} /* end of mgsl_isr_transmit_dma() */ + +/* mgsl_interrupt() + * + * Interrupt service routine entry point. + * + * Arguments: + * + * irq interrupt number that caused interrupt + * dev_id device ID supplied during interrupt registration + * regs interrupted processor context + * + * Return Value: None + */ +static void mgsl_interrupt(int irq, void *dev_id, struct pt_regs * regs) +{ + struct mgsl_struct * info; + u16 UscVector; + u16 DmaVector; + + if ( debug_level >= DEBUG_LEVEL_ISR ) + printk("%s(%d):mgsl_interrupt(%d)entry.\n", + __FILE__,__LINE__,irq); + + info = (struct mgsl_struct *)dev_id; + if (!info) + return; + + spin_lock(&info->irq_spinlock); + + for(;;) { + /* Read the interrupt vectors from hardware. */ + UscVector = usc_InReg(info, IVR) >> 9; + DmaVector = usc_InDmaReg(info, DIVR); + + if ( debug_level >= DEBUG_LEVEL_ISR ) + printk("%s(%d):%s UscVector=%08X DmaVector=%08X\n", + __FILE__,__LINE__,info->device_name,UscVector,DmaVector); + + if ( !UscVector && !DmaVector ) + break; + + /* Dispatch interrupt vector */ + if ( UscVector ) + (*UscIsrTable[UscVector])(info); + else if ( (DmaVector&(BIT10|BIT9)) == BIT10) + mgsl_isr_transmit_dma(info); + else + mgsl_isr_receive_dma(info); + + if ( info->isr_overflow ) { + printk(KERN_ERR"%s(%d):%s isr overflow irq=%d\n", + __FILE__,__LINE__,info->device_name, irq); + usc_DisableMasterIrqBit(info); + usc_DisableDmaInterrupts(info,DICR_MASTER); + break; + } + } + + /* Request bottom half processing if there's something + * for it to do and the bh is not already running + */ + + if ( info->pending_bh && !info->bh_running && !info->bh_requested ) { + if ( debug_level >= DEBUG_LEVEL_ISR ) + printk("%s(%d):%s queueing bh task.\n", + __FILE__,__LINE__,info->device_name); + queue_task(&info->task, &tq_immediate); + mark_bh(IMMEDIATE_BH); + info->bh_requested = 1; + } + + spin_unlock(&info->irq_spinlock); + + if ( debug_level >= DEBUG_LEVEL_ISR ) + printk("%s(%d):mgsl_interrupt(%d)exit.\n", + __FILE__,__LINE__,irq); + +} /* end of mgsl_interrupt() */ + +/* startup() + * + * Initialize and start device. 
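+ *
+ * Expected call pattern (the tty open path is not part of this
+ * hunk, so this is a sketch only):
+ *
+ *	retval = startup(info);
+ *	if (retval)
+ *		return retval;	(resources already released on failure)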
+ * + * Arguments: info pointer to device instance data + * Return Value: 0 if success, otherwise error code + */ +static int startup(struct mgsl_struct * info) +{ + int retval = 0; + + if ( debug_level >= DEBUG_LEVEL_INFO ) + printk("%s(%d):mgsl_startup(%s)\n",__FILE__,__LINE__,info->device_name); + + if (info->flags & ASYNC_INITIALIZED) + return 0; + + if (!info->xmit_buf) { + /* allocate a page of memory for a transmit buffer */ + info->xmit_buf = (unsigned char *)get_free_page(GFP_KERNEL); + if (!info->xmit_buf) { + printk(KERN_ERR"%s(%d):%s can't allocate transmit buffer\n", + __FILE__,__LINE__,info->device_name); + return -ENOMEM; + } + } + + info->pending_bh = 0; + + init_timer(&info->tx_timer); + info->tx_timer.data = (unsigned long)info; + info->tx_timer.function = mgsl_tx_timeout; + + /* Allocate and claim adapter resources */ + retval = mgsl_claim_resources(info); + + /* perform existence check and diagnostics */ + if ( !retval ) + retval = mgsl_adapter_test(info); + + if ( retval ) { + if (capable(CAP_SYS_ADMIN) && info->tty) + set_bit(TTY_IO_ERROR, &info->tty->flags); + mgsl_release_resources(info); + return retval; + } + + /* program hardware for current parameters */ + mgsl_change_params(info); + + if (info->tty) + clear_bit(TTY_IO_ERROR, &info->tty->flags); + + info->flags |= ASYNC_INITIALIZED; + + return 0; + +} /* end of startup() */ + +/* shutdown() + * + * Called by mgsl_close() and mgsl_hangup() to shutdown hardware + * + * Arguments: info pointer to device instance data + * Return Value: None + */ +static void shutdown(struct mgsl_struct * info) +{ + unsigned long flags; + + if (!(info->flags & ASYNC_INITIALIZED)) + return; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_shutdown(%s)\n", + __FILE__,__LINE__, info->device_name ); + + /* clear status wait queue because status changes */ + /* can't happen after shutting down the hardware */ + wake_up_interruptible(&info->status_event_wait_q); + wake_up_interruptible(&info->event_wait_q); + + del_timer(&info->tx_timer); + + if (info->xmit_buf) { + free_page((unsigned long) info->xmit_buf); + info->xmit_buf = 0; + } + + spin_lock_irqsave(&info->irq_spinlock,flags); + usc_DisableMasterIrqBit(info); + usc_stop_receiver(info); + usc_stop_transmitter(info); + usc_DisableInterrupts(info,RECEIVE_DATA + RECEIVE_STATUS + + TRANSMIT_DATA + TRANSMIT_STATUS + IO_PIN + MISC ); + usc_DisableDmaInterrupts(info,DICR_MASTER + DICR_TRANSMIT + DICR_RECEIVE); + + /* Disable DMAEN (Port 7, Bit 14) */ + /* This disconnects the DMA request signal from the ISA bus */ + /* on the ISA adapter. This has no effect for the PCI adapter */ + usc_OutReg(info, PCR, (u16)((usc_InReg(info, PCR) | BIT15) | BIT14)); + + /* Disable INTEN (Port 6, Bit12) */ + /* This disconnects the IRQ request signal to the ISA bus */ + /* on the ISA adapter.
This has no effect for the PCI adapter */ + usc_OutReg(info, PCR, (u16)((usc_InReg(info, PCR) | BIT13) | BIT12)); + + if (!info->tty || info->tty->termios->c_cflag & HUPCL) { + info->serial_signals &= ~(SerialSignal_DTR + SerialSignal_RTS); + usc_set_serial_signals(info); + } + + spin_unlock_irqrestore(&info->irq_spinlock,flags); + + mgsl_release_resources(info); + + if (info->tty) + set_bit(TTY_IO_ERROR, &info->tty->flags); + + info->flags &= ~ASYNC_INITIALIZED; + +} /* end of shutdown() */ + +static void mgsl_program_hw(struct mgsl_struct *info) +{ + unsigned long flags; + + spin_lock_irqsave(&info->irq_spinlock,flags); + + usc_stop_receiver(info); + usc_stop_transmitter(info); + info->xmit_cnt = info->xmit_head = info->xmit_tail = 0; + + if (info->params.mode == MGSL_MODE_HDLC || + info->params.mode == MGSL_MODE_RAW || + info->netcount) + usc_set_sync_mode(info); + else + usc_set_async_mode(info); + + usc_set_serial_signals(info); + + info->dcd_chkcount = 0; + info->cts_chkcount = 0; + info->ri_chkcount = 0; + info->dsr_chkcount = 0; + + usc_EnableStatusIrqs(info,SICR_CTS+SICR_DSR+SICR_DCD+SICR_RI); + usc_EnableInterrupts(info, IO_PIN); + usc_get_serial_signals(info); + + if (info->netcount || info->tty->termios->c_cflag & CREAD) + usc_start_receiver(info); + + spin_unlock_irqrestore(&info->irq_spinlock,flags); +} + +/* Reconfigure adapter based on new parameters + */ +static void mgsl_change_params(struct mgsl_struct *info) +{ + unsigned cflag; + int bits_per_char; + + if (!info->tty || !info->tty->termios) + return; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_change_params(%s)\n", + __FILE__,__LINE__, info->device_name ); + + cflag = info->tty->termios->c_cflag; + + /* if B0 rate (hangup) specified then negate DTR and RTS */ + /* otherwise assert DTR and RTS */ + if (cflag & CBAUD) + info->serial_signals |= SerialSignal_RTS + SerialSignal_DTR; + else + info->serial_signals &= ~(SerialSignal_RTS + SerialSignal_DTR); + + /* byte size and parity */ + + switch (cflag & CSIZE) { + case CS5: info->params.data_bits = 5; break; + case CS6: info->params.data_bits = 6; break; + case CS7: info->params.data_bits = 7; break; + case CS8: info->params.data_bits = 8; break; + /* Never happens, but GCC is too dumb to figure it out */ + default: info->params.data_bits = 7; break; + } + + if (cflag & CSTOPB) + info->params.stop_bits = 2; + else + info->params.stop_bits = 1; + + info->params.parity = ASYNC_PARITY_NONE; + if (cflag & PARENB) { + if (cflag & PARODD) + info->params.parity = ASYNC_PARITY_ODD; + else + info->params.parity = ASYNC_PARITY_EVEN; +#ifdef CMSPAR + if (cflag & CMSPAR) + info->params.parity = ASYNC_PARITY_SPACE; +#endif + } + + /* calculate number of jiffies to transmit a full + * FIFO (32 bytes) at specified data rate + */ + bits_per_char = info->params.data_bits + + info->params.stop_bits + 1; + + /* if port data rate is set to 460800 or less then + * allow tty settings to override, otherwise keep the + * current data rate. 
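+ *
+ * Worked example, assuming HZ=100: at 9600 bps with 8N1,
+ * bits_per_char = 8 + 1 + 1 = 10, so a full 32-byte FIFO is
+ * 320 bits, about 33 ms; (32*HZ*10)/9600 = 3 jiffies, plus the
+ * HZ/50 = 2 jiffies of slop added below, giving a 5 jiffy timeout.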
+ */ + if (info->params.data_rate <= 460800) + info->params.data_rate = tty_get_baud_rate(info->tty); + + if ( info->params.data_rate ) { + info->timeout = (32*HZ*bits_per_char) / + info->params.data_rate; + } + info->timeout += HZ/50; /* Add .02 seconds of slop */ + + if (cflag & CRTSCTS) + info->flags |= ASYNC_CTS_FLOW; + else + info->flags &= ~ASYNC_CTS_FLOW; + + if (cflag & CLOCAL) + info->flags &= ~ASYNC_CHECK_CD; + else + info->flags |= ASYNC_CHECK_CD; + + /* process tty input control flags */ + + info->read_status_mask = RXSTATUS_OVERRUN; + if (I_INPCK(info->tty)) + info->read_status_mask |= RXSTATUS_PARITY_ERROR | RXSTATUS_FRAMING_ERROR; + if (I_BRKINT(info->tty) || I_PARMRK(info->tty)) + info->read_status_mask |= RXSTATUS_BREAK_RECEIVED; + + if (I_IGNPAR(info->tty)) + info->ignore_status_mask |= RXSTATUS_PARITY_ERROR | RXSTATUS_FRAMING_ERROR; + if (I_IGNBRK(info->tty)) { + info->ignore_status_mask |= RXSTATUS_BREAK_RECEIVED; + /* If ignoring parity and break indicators, ignore + * overruns too. (For real raw support). + */ + if (I_IGNPAR(info->tty)) + info->ignore_status_mask |= RXSTATUS_OVERRUN; + } + + mgsl_program_hw(info); + +} /* end of mgsl_change_params() */ + +/* mgsl_put_char() + * + * Add a character to the transmit buffer. + * + * Arguments: tty pointer to tty information structure + * ch character to add to transmit buffer + * + * Return Value: None + */ +static void mgsl_put_char(struct tty_struct *tty, unsigned char ch) +{ + struct mgsl_struct *info = (struct mgsl_struct *)tty->driver_data; + unsigned long flags; + + if ( debug_level >= DEBUG_LEVEL_INFO ) { + printk( "%s(%d):mgsl_put_char(%d) on %s\n", + __FILE__,__LINE__,ch,info->device_name); + } + + if (mgsl_paranoia_check(info, tty->device, "mgsl_put_char")) + return; + + if (!tty || !info->xmit_buf) + return; + + spin_lock_irqsave(&info->irq_spinlock,flags); + + if ( (info->params.mode == MGSL_MODE_ASYNC ) || !info->tx_active ) { + + if (info->xmit_cnt < SERIAL_XMIT_SIZE - 1) { + info->xmit_buf[info->xmit_head++] = ch; + info->xmit_head &= SERIAL_XMIT_SIZE-1; + info->xmit_cnt++; + } + } + + spin_unlock_irqrestore(&info->irq_spinlock,flags); + +} /* end of mgsl_put_char() */ + +/* mgsl_flush_chars() + * + * Enable transmitter so remaining characters in the + * transmit buffer are sent. + * + * Arguments: tty pointer to tty information structure + * Return Value: None + */ +static void mgsl_flush_chars(struct tty_struct *tty) +{ + struct mgsl_struct *info = (struct mgsl_struct *)tty->driver_data; + unsigned long flags; + + if ( debug_level >= DEBUG_LEVEL_INFO ) + printk( "%s(%d):mgsl_flush_chars() entry on %s xmit_cnt=%d\n", + __FILE__,__LINE__,info->device_name,info->xmit_cnt); + + if (mgsl_paranoia_check(info, tty->device, "mgsl_flush_chars")) + return; + + if (info->xmit_cnt <= 0 || tty->stopped || tty->hw_stopped || + !info->xmit_buf) + return; + + if ( debug_level >= DEBUG_LEVEL_INFO ) + printk( "%s(%d):mgsl_flush_chars() entry on %s starting transmitter\n", + __FILE__,__LINE__,info->device_name ); + + spin_lock_irqsave(&info->irq_spinlock,flags); + + if (!info->tx_active) { + if ( (info->params.mode == MGSL_MODE_HDLC || + info->params.mode == MGSL_MODE_RAW) && info->xmit_cnt ) { + /* operating in synchronous (frame oriented) mode */ + /* copy data from circular xmit_buf to */ + /* transmit DMA buffer. 
*/ + mgsl_load_tx_dma_buffer(info, + info->xmit_buf,info->xmit_cnt); + } + usc_start_transmitter(info); + } + + spin_unlock_irqrestore(&info->irq_spinlock,flags); + +} /* end of mgsl_flush_chars() */ + +/* mgsl_write() + * + * Send a block of data + * + * Arguments: + * + * tty pointer to tty information structure + * from_user flag: 1 = from user process + * buf pointer to buffer containing send data + * count size of send data in bytes + * + * Return Value: number of characters written + */ +static int mgsl_write(struct tty_struct * tty, int from_user, + const unsigned char *buf, int count) +{ + int c, ret = 0, err; + struct mgsl_struct *info = (struct mgsl_struct *)tty->driver_data; + unsigned long flags; + + if ( debug_level >= DEBUG_LEVEL_INFO ) + printk( "%s(%d):mgsl_write(%s) count=%d\n", + __FILE__,__LINE__,info->device_name,count); + + if (mgsl_paranoia_check(info, tty->device, "mgsl_write")) + goto cleanup; + + if (!tty || !info->xmit_buf || !tmp_buf) + goto cleanup; + + if ( info->params.mode == MGSL_MODE_HDLC || + info->params.mode == MGSL_MODE_RAW ) { + /* operating in synchronous (frame oriented) mode */ + if (info->tx_active) { + + if ( info->params.mode == MGSL_MODE_HDLC ) { + ret = 0; + goto cleanup; + } + /* transmitter is actively sending data - + * if we have multiple transmit dma and + * holding buffers, attempt to queue this + * frame for transmission at a later time. + */ + if (info->tx_holding_count >= info->num_tx_holding_buffers ) { + /* no tx holding buffers available */ + ret = 0; + goto cleanup; + } + + /* queue transmit frame request */ + ret = count; + if (from_user) { + down(&tmp_buf_sem); + COPY_FROM_USER(err,tmp_buf, buf, count); + if (err) { + if ( debug_level >= DEBUG_LEVEL_INFO ) + printk( "%s(%d):mgsl_write(%s) sync user buf copy failed\n", + __FILE__,__LINE__,info->device_name); + ret = -EFAULT; + } else + save_tx_buffer_request(info,tmp_buf,count); + up(&tmp_buf_sem); + } + else + save_tx_buffer_request(info,buf,count); + + /* if we have sufficient tx dma buffers, + * load the next buffered tx request + */ + spin_lock_irqsave(&info->irq_spinlock,flags); + load_next_tx_holding_buffer(info); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + goto cleanup; + } + + /* if operating in HDLC LoopMode and the adapter */ + /* has yet to be inserted into the loop, we can't */ + /* transmit */ + + if ( (info->params.flags & HDLC_FLAG_HDLC_LOOPMODE) && + !usc_loopmode_active(info) ) + { + ret = 0; + goto cleanup; + } + + if ( info->xmit_cnt ) { + /* Send accumulated data from send_char() calls */ + /* as frame and wait before accepting more data. */ + ret = 0; + + /* copy data from circular xmit_buf to */ + /* transmit DMA buffer.
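+ *
+ * For the async copy path further down, the chunk size is clamped
+ * with
+ *
+ *	c = MIN(count, MIN(SERIAL_XMIT_SIZE - info->xmit_cnt - 1,
+ *		SERIAL_XMIT_SIZE - info->xmit_head));
+ *
+ * i.e. the smaller of the total free space (minus one byte so a
+ * full buffer is distinguishable from an empty one) and the
+ * contiguous run up to the wrap point at xmit_head.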
*/ + mgsl_load_tx_dma_buffer(info, + info->xmit_buf,info->xmit_cnt); + if ( debug_level >= DEBUG_LEVEL_INFO ) + printk( "%s(%d):mgsl_write(%s) sync xmit_cnt flushing\n", + __FILE__,__LINE__,info->device_name); + } else { + if ( debug_level >= DEBUG_LEVEL_INFO ) + printk( "%s(%d):mgsl_write(%s) sync transmit accepted\n", + __FILE__,__LINE__,info->device_name); + ret = count; + info->xmit_cnt = count; + if (from_user) { + down(&tmp_buf_sem); + COPY_FROM_USER(err,tmp_buf, buf, count); + if (err) { + if ( debug_level >= DEBUG_LEVEL_INFO ) + printk( "%s(%d):mgsl_write(%s) sync user buf copy failed\n", + __FILE__,__LINE__,info->device_name); + ret = -EFAULT; + } else + mgsl_load_tx_dma_buffer(info,tmp_buf,count); + up(&tmp_buf_sem); + } + else + mgsl_load_tx_dma_buffer(info,buf,count); + } + } else { + if (from_user) { + down(&tmp_buf_sem); + while (1) { + c = MIN(count, + MIN(SERIAL_XMIT_SIZE - info->xmit_cnt - 1, + SERIAL_XMIT_SIZE - info->xmit_head)); + if (c <= 0) + break; + + COPY_FROM_USER(err,tmp_buf, buf, c); + c -= err; + if (!c) { + if (!ret) + ret = -EFAULT; + break; + } + spin_lock_irqsave(&info->irq_spinlock,flags); + c = MIN(c, MIN(SERIAL_XMIT_SIZE - info->xmit_cnt - 1, + SERIAL_XMIT_SIZE - info->xmit_head)); + memcpy(info->xmit_buf + info->xmit_head, tmp_buf, c); + info->xmit_head = ((info->xmit_head + c) & + (SERIAL_XMIT_SIZE-1)); + info->xmit_cnt += c; + spin_unlock_irqrestore(&info->irq_spinlock,flags); + buf += c; + count -= c; + ret += c; + } + up(&tmp_buf_sem); + } else { + while (1) { + spin_lock_irqsave(&info->irq_spinlock,flags); + c = MIN(count, + MIN(SERIAL_XMIT_SIZE - info->xmit_cnt - 1, + SERIAL_XMIT_SIZE - info->xmit_head)); + if (c <= 0) { + spin_unlock_irqrestore(&info->irq_spinlock,flags); + break; + } + memcpy(info->xmit_buf + info->xmit_head, buf, c); + info->xmit_head = ((info->xmit_head + c) & + (SERIAL_XMIT_SIZE-1)); + info->xmit_cnt += c; + spin_unlock_irqrestore(&info->irq_spinlock,flags); + buf += c; + count -= c; + ret += c; + } + } + } + + if (info->xmit_cnt && !tty->stopped && !tty->hw_stopped) { + spin_lock_irqsave(&info->irq_spinlock,flags); + if (!info->tx_active) + usc_start_transmitter(info); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + } +cleanup: + if ( debug_level >= DEBUG_LEVEL_INFO ) + printk( "%s(%d):mgsl_write(%s) returning=%d\n", + __FILE__,__LINE__,info->device_name,ret); + + return ret; + +} /* end of mgsl_write() */ + +/* mgsl_write_room() + * + * Return the count of free bytes in transmit buffer + * + * Arguments: tty pointer to tty info structure + * Return Value: None + */ +static int mgsl_write_room(struct tty_struct *tty) +{ + struct mgsl_struct *info = (struct mgsl_struct *)tty->driver_data; + int ret; + + if (mgsl_paranoia_check(info, tty->device, "mgsl_write_room")) + return 0; + ret = SERIAL_XMIT_SIZE - info->xmit_cnt - 1; + if (ret < 0) + ret = 0; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_write_room(%s)=%d\n", + __FILE__,__LINE__, info->device_name,ret ); + + if ( info->params.mode == MGSL_MODE_HDLC || + info->params.mode == MGSL_MODE_RAW ) { + /* operating in synchronous (frame oriented) mode */ + if ( info->tx_active ) + return 0; + else + return HDLC_MAX_FRAME_SIZE; + } + + return ret; + +} /* end of mgsl_write_room() */ + +/* mgsl_chars_in_buffer() + * + * Return the count of bytes in transmit buffer + * + * Arguments: tty pointer to tty info structure + * Return Value: None + */ +static int mgsl_chars_in_buffer(struct tty_struct *tty) +{ + struct mgsl_struct *info = (struct mgsl_struct 
*)tty->driver_data; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_chars_in_buffer(%s)\n", + __FILE__,__LINE__, info->device_name ); + + if (mgsl_paranoia_check(info, tty->device, "mgsl_chars_in_buffer")) + return 0; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_chars_in_buffer(%s)=%d\n", + __FILE__,__LINE__, info->device_name,info->xmit_cnt ); + + if ( info->params.mode == MGSL_MODE_HDLC || + info->params.mode == MGSL_MODE_RAW ) { + /* operating in synchronous (frame oriented) mode */ + if ( info->tx_active ) + return info->max_frame_size; + else + return 0; + } + + return info->xmit_cnt; +} /* end of mgsl_chars_in_buffer() */ + +/* mgsl_flush_buffer() + * + * Discard all data in the send buffer + * + * Arguments: tty pointer to tty info structure + * Return Value: None + */ +static void mgsl_flush_buffer(struct tty_struct *tty) +{ + struct mgsl_struct *info = (struct mgsl_struct *)tty->driver_data; + unsigned long flags; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_flush_buffer(%s) entry\n", + __FILE__,__LINE__, info->device_name ); + + if (mgsl_paranoia_check(info, tty->device, "mgsl_flush_buffer")) + return; + + spin_lock_irqsave(&info->irq_spinlock,flags); + info->xmit_cnt = info->xmit_head = info->xmit_tail = 0; + del_timer(&info->tx_timer); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + + wake_up_interruptible(&tty->write_wait); + if ((tty->flags & (1 << TTY_DO_WRITE_WAKEUP)) && + tty->ldisc.write_wakeup) + (tty->ldisc.write_wakeup)(tty); + +} /* end of mgsl_flush_buffer() */ + +/* mgsl_send_xchar() + * + * Send a high-priority XON/XOFF character + * + * Arguments: tty pointer to tty info structure + * ch character to send + * Return Value: None + */ +static void mgsl_send_xchar(struct tty_struct *tty, char ch) +{ + struct mgsl_struct *info = (struct mgsl_struct *)tty->driver_data; + unsigned long flags; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_send_xchar(%s,%d)\n", + __FILE__,__LINE__, info->device_name, ch ); + + if (mgsl_paranoia_check(info, tty->device, "mgsl_send_xchar")) + return; + + info->x_char = ch; + if (ch) { + /* Make sure transmit interrupts are on */ + spin_lock_irqsave(&info->irq_spinlock,flags); + if (!info->tx_enabled) + usc_start_transmitter(info); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + } +} /* end of mgsl_send_xchar() */ + +/* mgsl_throttle() + * + * Signal remote device to throttle send data (our receive data) + * + * Arguments: tty pointer to tty info structure + * Return Value: None + */ +static void mgsl_throttle(struct tty_struct * tty) +{ + struct mgsl_struct *info = (struct mgsl_struct *)tty->driver_data; + unsigned long flags; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_throttle(%s) entry\n", + __FILE__,__LINE__, info->device_name ); + + if (mgsl_paranoia_check(info, tty->device, "mgsl_throttle")) + return; + + if (I_IXOFF(tty)) + mgsl_send_xchar(tty, STOP_CHAR(tty)); + + if (tty->termios->c_cflag & CRTSCTS) { + spin_lock_irqsave(&info->irq_spinlock,flags); + info->serial_signals &= ~SerialSignal_RTS; + usc_set_serial_signals(info); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + } +} /* end of mgsl_throttle() */ + +/* mgsl_unthrottle() + * + * Signal remote device to stop throttling send data (our receive data) + * + * Arguments: tty pointer to tty info structure + * Return Value: None + */ +static void mgsl_unthrottle(struct tty_struct * tty) +{ + struct mgsl_struct *info = (struct mgsl_struct *)tty->driver_data; + unsigned long 
flags; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_unthrottle(%s) entry\n", + __FILE__,__LINE__, info->device_name ); + + if (mgsl_paranoia_check(info, tty->device, "mgsl_unthrottle")) + return; + + if (I_IXOFF(tty)) { + if (info->x_char) + info->x_char = 0; + else + mgsl_send_xchar(tty, START_CHAR(tty)); + } + + if (tty->termios->c_cflag & CRTSCTS) { + spin_lock_irqsave(&info->irq_spinlock,flags); + info->serial_signals |= SerialSignal_RTS; + usc_set_serial_signals(info); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + } + +} /* end of mgsl_unthrottle() */ + +/* mgsl_get_stats() + * + * get the current serial statistics information + * + * Arguments: info pointer to device instance data + * user_icount pointer to buffer to hold returned stats + * + * Return Value: 0 if success, otherwise error code + */ +static int mgsl_get_stats(struct mgsl_struct * info, struct mgsl_icount *user_icount) +{ + int err; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_get_stats(%s)\n", + __FILE__,__LINE__, info->device_name); + + COPY_TO_USER(err,user_icount, &info->icount, sizeof(struct mgsl_icount)); + if (err) { + if ( debug_level >= DEBUG_LEVEL_INFO ) + printk( "%s(%d):mgsl_get_stats(%s) user buffer copy failed\n", + __FILE__,__LINE__,info->device_name); + return -EFAULT; + } + + return 0; + +} /* end of mgsl_get_stats() */ + +/* mgsl_get_params() + * + * get the current serial parameters information + * + * Arguments: info pointer to device instance data + * user_params pointer to buffer to hold returned params + * + * Return Value: 0 if success, otherwise error code + */ +static int mgsl_get_params(struct mgsl_struct * info, MGSL_PARAMS *user_params) +{ + int err; + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_get_params(%s)\n", + __FILE__,__LINE__, info->device_name); + + COPY_TO_USER(err,user_params, &info->params, sizeof(MGSL_PARAMS)); + if (err) { + if ( debug_level >= DEBUG_LEVEL_INFO ) + printk( "%s(%d):mgsl_get_params(%s) user buffer copy failed\n", + __FILE__,__LINE__,info->device_name); + return -EFAULT; + } + + return 0; + +} /* end of mgsl_get_params() */ + +/* mgsl_set_params() + * + * set the serial parameters + * + * Arguments: + * + * info pointer to device instance data + * new_params user buffer containing new serial params + * + * Return Value: 0 if success, otherwise error code + */ +static int mgsl_set_params(struct mgsl_struct * info, MGSL_PARAMS *new_params) +{ + unsigned long flags; + MGSL_PARAMS tmp_params; + int err; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_set_params %s\n", __FILE__,__LINE__, + info->device_name ); + COPY_FROM_USER(err,&tmp_params, new_params, sizeof(MGSL_PARAMS)); + if (err) { + if ( debug_level >= DEBUG_LEVEL_INFO ) + printk( "%s(%d):mgsl_set_params(%s) user buffer copy failed\n", + __FILE__,__LINE__,info->device_name); + return -EFAULT; + } + + spin_lock_irqsave(&info->irq_spinlock,flags); + memcpy(&info->params,&tmp_params,sizeof(MGSL_PARAMS)); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + + mgsl_change_params(info); + + return 0; + +} /* end of mgsl_set_params() */ + +/* mgsl_get_txidle() + * + * get the current transmit idle mode + * + * Arguments: info pointer to device instance data + * idle_mode pointer to buffer to hold returned idle mode + * + * Return Value: 0 if success, otherwise error code + */ +static int mgsl_get_txidle(struct mgsl_struct * info, int*idle_mode) +{ + int err; + + if (debug_level >= DEBUG_LEVEL_INFO) +
printk("%s(%d):mgsl_get_txidle(%s)=%d\n", + __FILE__,__LINE__, info->device_name, info->idle_mode); + + COPY_TO_USER(err,idle_mode, &info->idle_mode, sizeof(int)); + if (err) { + if ( debug_level >= DEBUG_LEVEL_INFO ) + printk( "%s(%d):mgsl_get_txidle(%s) user buffer copy failed\n", + __FILE__,__LINE__,info->device_name); + return -EFAULT; + } + + return 0; + +} /* end of mgsl_get_txidle() */ + +/* mgsl_set_txidle() service ioctl to set transmit idle mode + * + * Arguments: info pointer to device instance data + * idle_mode new idle mode + * + * Return Value: 0 if success, otherwise error code + */ +static int mgsl_set_txidle(struct mgsl_struct * info, int idle_mode) +{ + unsigned long flags; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_set_txidle(%s,%d)\n", __FILE__,__LINE__, + info->device_name, idle_mode ); + + spin_lock_irqsave(&info->irq_spinlock,flags); + info->idle_mode = idle_mode; + usc_set_txidle( info ); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + return 0; + +} /* end of mgsl_set_txidle() */ + +/* mgsl_txenable() + * + * enable or disable the transmitter + * + * Arguments: + * + * info pointer to device instance data + * enable 1 = enable, 0 = disable + * + * Return Value: 0 if success, otherwise error code + */ +static int mgsl_txenable(struct mgsl_struct * info, int enable) +{ + unsigned long flags; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_txenable(%s,%d)\n", __FILE__,__LINE__, + info->device_name, enable); + + spin_lock_irqsave(&info->irq_spinlock,flags); + if ( enable ) { + if ( !info->tx_enabled ) { + + usc_start_transmitter(info); + /*-------------------------------------------------- + * if HDLC/SDLC Loop mode, attempt to insert the + * station in the 'loop' by setting CMR:13. Upon + * receipt of the next GoAhead (RxAbort) sequence, + * the OnLoop indicator (CCSR:7) should go active + * to indicate that we are on the loop + *--------------------------------------------------*/ + if ( info->params.flags & HDLC_FLAG_HDLC_LOOPMODE ) + usc_loopmode_insert_request( info ); + } + } else { + if ( info->tx_enabled ) + usc_stop_transmitter(info); + } + spin_unlock_irqrestore(&info->irq_spinlock,flags); + return 0; + +} /* end of mgsl_txenable() */ + +/* mgsl_txabort() abort send HDLC frame + * + * Arguments: info pointer to device instance data + * Return Value: 0 if success, otherwise error code + */ +static int mgsl_txabort(struct mgsl_struct * info) +{ + unsigned long flags; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_txabort(%s)\n", __FILE__,__LINE__, + info->device_name); + + spin_lock_irqsave(&info->irq_spinlock,flags); + if ( info->tx_active && info->params.mode == MGSL_MODE_HDLC ) + { + if ( info->params.flags & HDLC_FLAG_HDLC_LOOPMODE ) + usc_loopmode_cancel_transmit( info ); + else + usc_TCmd(info,TCmd_SendAbort); + } + spin_unlock_irqrestore(&info->irq_spinlock,flags); + return 0; + +} /* end of mgsl_txabort() */ + +/* mgsl_rxenable() enable or disable the receiver + * + * Arguments: info pointer to device instance data + * enable 1 = enable, 0 = disable + * Return Value: 0 if success, otherwise error code + */ +static int mgsl_rxenable(struct mgsl_struct * info, int enable) +{ + unsigned long flags; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_rxenable(%s,%d)\n", __FILE__,__LINE__, + info->device_name, enable); + + spin_lock_irqsave(&info->irq_spinlock,flags); + if ( enable ) { + if ( !info->rx_enabled ) + usc_start_receiver(info); + } else { + if ( info->rx_enabled ) + 
usc_stop_receiver(info); + } + spin_unlock_irqrestore(&info->irq_spinlock,flags); + return 0; + +} /* end of mgsl_rxenable() */ + +/* mgsl_wait_event() wait for specified event to occur + * + * Arguments: info pointer to device instance data + * mask pointer to bitmask of events to wait for + * Return Value: 0 if successful, with the bit mask updated to + * the events triggered, + * otherwise error code + */ +static int mgsl_wait_event(struct mgsl_struct * info, int * mask_ptr) +{ + unsigned long flags; + int s; + int rc=0; + struct mgsl_icount cprev, cnow; + int events; + int mask; + struct _input_signal_events oldsigs, newsigs; + DECLARE_WAITQUEUE(wait, current); + + COPY_FROM_USER(rc,&mask, mask_ptr, sizeof(int)); + if (rc) { + return -EFAULT; + } + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_wait_event(%s,%d)\n", __FILE__,__LINE__, + info->device_name, mask); + + spin_lock_irqsave(&info->irq_spinlock,flags); + + /* return immediately if state matches requested events */ + usc_get_serial_signals(info); + s = info->serial_signals; + events = mask & + ( ((s & SerialSignal_DSR) ? MgslEvent_DsrActive:MgslEvent_DsrInactive) + + ((s & SerialSignal_DCD) ? MgslEvent_DcdActive:MgslEvent_DcdInactive) + + ((s & SerialSignal_CTS) ? MgslEvent_CtsActive:MgslEvent_CtsInactive) + + ((s & SerialSignal_RI) ? MgslEvent_RiActive :MgslEvent_RiInactive) ); + if (events) { + spin_unlock_irqrestore(&info->irq_spinlock,flags); + goto exit; + } + + /* save current irq counts */ + cprev = info->icount; + oldsigs = info->input_signal_events; + + /* enable hunt and idle irqs if needed */ + if (mask & (MgslEvent_ExitHuntMode + MgslEvent_IdleReceived)) { + u16 oldreg = usc_InReg(info,RICR); + u16 newreg = oldreg + + (mask & MgslEvent_ExitHuntMode ? RXSTATUS_EXITED_HUNT:0) + + (mask & MgslEvent_IdleReceived ? RXSTATUS_IDLE_RECEIVED:0); + if (oldreg != newreg) + usc_OutReg(info, RICR, newreg); + } + + set_current_state(TASK_INTERRUPTIBLE); + add_wait_queue(&info->event_wait_q, &wait); + + spin_unlock_irqrestore(&info->irq_spinlock,flags); + + + for(;;) { + schedule(); + if (signal_pending(current)) { + rc = -ERESTARTSYS; + break; + } + + /* get current irq counts */ + spin_lock_irqsave(&info->irq_spinlock,flags); + cnow = info->icount; + newsigs = info->input_signal_events; + set_current_state(TASK_INTERRUPTIBLE); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + + /* if no change, wait aborted for some reason */ + if (newsigs.dsr_up == oldsigs.dsr_up && + newsigs.dsr_down == oldsigs.dsr_down && + newsigs.dcd_up == oldsigs.dcd_up && + newsigs.dcd_down == oldsigs.dcd_down && + newsigs.cts_up == oldsigs.cts_up && + newsigs.cts_down == oldsigs.cts_down && + newsigs.ri_up == oldsigs.ri_up && + newsigs.ri_down == oldsigs.ri_down && + cnow.exithunt == cprev.exithunt && + cnow.rxidle == cprev.rxidle) { + rc = -EIO; + break; + } + + events = mask & + ( (newsigs.dsr_up != oldsigs.dsr_up ? MgslEvent_DsrActive:0) + + (newsigs.dsr_down != oldsigs.dsr_down ? MgslEvent_DsrInactive:0) + + (newsigs.dcd_up != oldsigs.dcd_up ? MgslEvent_DcdActive:0) + + (newsigs.dcd_down != oldsigs.dcd_down ? MgslEvent_DcdInactive:0) + + (newsigs.cts_up != oldsigs.cts_up ? MgslEvent_CtsActive:0) + + (newsigs.cts_down != oldsigs.cts_down ? MgslEvent_CtsInactive:0) + + (newsigs.ri_up != oldsigs.ri_up ? MgslEvent_RiActive:0) + + (newsigs.ri_down != oldsigs.ri_down ? MgslEvent_RiInactive:0) + + (cnow.exithunt != cprev.exithunt ? MgslEvent_ExitHuntMode:0) + + (cnow.rxidle != cprev.rxidle ?
MgslEvent_IdleReceived:0) ); + if (events) + break; + + cprev = cnow; + oldsigs = newsigs; + } + + remove_wait_queue(&info->event_wait_q, &wait); + set_current_state(TASK_RUNNING); + + if (mask & (MgslEvent_ExitHuntMode + MgslEvent_IdleReceived)) { + spin_lock_irqsave(&info->irq_spinlock,flags); + if (!waitqueue_active(&info->event_wait_q)) { + /* disable exit hunt mode/idle rcvd IRQs */ + usc_OutReg(info, RICR, usc_InReg(info,RICR) & + ~(RXSTATUS_EXITED_HUNT + RXSTATUS_IDLE_RECEIVED)); + } + spin_unlock_irqrestore(&info->irq_spinlock,flags); + } +exit: + if ( rc == 0 ) + PUT_USER(rc, events, mask_ptr); + + return rc; + +} /* end of mgsl_wait_event() */ + +static int modem_input_wait(struct mgsl_struct *info,int arg) +{ + unsigned long flags; + int rc; + struct mgsl_icount cprev, cnow; + DECLARE_WAITQUEUE(wait, current); + + /* save current irq counts */ + spin_lock_irqsave(&info->irq_spinlock,flags); + cprev = info->icount; + add_wait_queue(&info->status_event_wait_q, &wait); + set_current_state(TASK_INTERRUPTIBLE); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + + for(;;) { + schedule(); + if (signal_pending(current)) { + rc = -ERESTARTSYS; + break; + } + + /* get new irq counts */ + spin_lock_irqsave(&info->irq_spinlock,flags); + cnow = info->icount; + set_current_state(TASK_INTERRUPTIBLE); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + + /* if no change, wait aborted for some reason */ + if (cnow.rng == cprev.rng && cnow.dsr == cprev.dsr && + cnow.dcd == cprev.dcd && cnow.cts == cprev.cts) { + rc = -EIO; + break; + } + + /* check for change in caller specified modem input */ + if ((arg & TIOCM_RNG && cnow.rng != cprev.rng) || + (arg & TIOCM_DSR && cnow.dsr != cprev.dsr) || + (arg & TIOCM_CD && cnow.dcd != cprev.dcd) || + (arg & TIOCM_CTS && cnow.cts != cprev.cts)) { + rc = 0; + break; + } + + cprev = cnow; + } + remove_wait_queue(&info->status_event_wait_q, &wait); + set_current_state(TASK_RUNNING); + return rc; +} + +/* get_modem_info() + * + * Read the state of the serial control and + * status signals and return to caller.
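+ *
+ * User space reaches this via the TIOCMGET ioctl; an illustrative
+ * (not driver-specific) caller, sketch only:
+ *
+ *	int bits;
+ *	ioctl(fd, TIOCMGET, &bits);
+ *	if (bits & TIOCM_CAR)
+ *		...DCD is asserted...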
+ * + * Arguments: info pointer to device instance data + * value pointer to int to hold returned info + * + * Return Value: 0 if success, otherwise error code + */ +static int get_modem_info(struct mgsl_struct * info, unsigned int *value) +{ + unsigned int result = 0; + unsigned long flags; + int err; + + spin_lock_irqsave(&info->irq_spinlock,flags); + usc_get_serial_signals(info); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + + if (info->serial_signals & SerialSignal_RTS) + result |= TIOCM_RTS; + if (info->serial_signals & SerialSignal_DTR) + result |= TIOCM_DTR; + if (info->serial_signals & SerialSignal_DCD) + result |= TIOCM_CAR; + if (info->serial_signals & SerialSignal_RI) + result |= TIOCM_RNG; + if (info->serial_signals & SerialSignal_DSR) + result |= TIOCM_DSR; + if (info->serial_signals & SerialSignal_CTS) + result |= TIOCM_CTS; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_get_modem_info %s value=%08X\n", + __FILE__,__LINE__, info->device_name, result ); + + PUT_USER(err,result,value); + return err; +} /* end of get_modem_info() */ + +/* set_modem_info() + * + * Set the state of the modem control signals (DTR/RTS) + * + * Arguments: + * + * info pointer to device instance data + * cmd signal command: TIOCMBIS = set bit TIOCMBIC = clear bit + * TIOCMSET = set/clear signal values + * value bit mask for command + * + * Return Value: 0 if success, otherwise error code + */ +static int set_modem_info(struct mgsl_struct * info, unsigned int cmd, + unsigned int *value) +{ + int error; + unsigned int arg; + unsigned long flags; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_set_modem_info %s\n", __FILE__,__LINE__, + info->device_name ); + + GET_USER(error,arg,value); + if (error) + return error; + + switch (cmd) { + case TIOCMBIS: + if (arg & TIOCM_RTS) + info->serial_signals |= SerialSignal_RTS; + if (arg & TIOCM_DTR) + info->serial_signals |= SerialSignal_DTR; + break; + case TIOCMBIC: + if (arg & TIOCM_RTS) + info->serial_signals &= ~SerialSignal_RTS; + if (arg & TIOCM_DTR) + info->serial_signals &= ~SerialSignal_DTR; + break; + case TIOCMSET: + if (arg & TIOCM_RTS) + info->serial_signals |= SerialSignal_RTS; + else + info->serial_signals &= ~SerialSignal_RTS; + + if (arg & TIOCM_DTR) + info->serial_signals |= SerialSignal_DTR; + else + info->serial_signals &= ~SerialSignal_DTR; + break; + default: + return -EINVAL; + } + + spin_lock_irqsave(&info->irq_spinlock,flags); + usc_set_serial_signals(info); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + + return 0; + +} /* end of set_modem_info() */ + +/* mgsl_break() Set or clear transmit break condition + * + * Arguments: tty pointer to tty instance data + * break_state -1=set break condition, 0=clear + * Return Value: None + */ +static void mgsl_break(struct tty_struct *tty, int break_state) +{ + struct mgsl_struct * info = (struct mgsl_struct *)tty->driver_data; + unsigned long flags; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_break(%s,%d)\n", + __FILE__,__LINE__, info->device_name, break_state); + + if (mgsl_paranoia_check(info, tty->device, "mgsl_break")) + return; + + spin_lock_irqsave(&info->irq_spinlock,flags); + if (break_state == -1) + usc_OutReg(info,IOCR,(u16)(usc_InReg(info,IOCR) | BIT7)); + else + usc_OutReg(info,IOCR,(u16)(usc_InReg(info,IOCR) & ~BIT7)); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + +} /* end of mgsl_break() */ + +/* mgsl_ioctl() Service an IOCTL request + * + * Arguments: + * + * tty pointer to tty instance data + * file pointer 
to associated file object for device + * cmd IOCTL command code + * arg command argument/context + * + * Return Value: 0 if success, otherwise error code + */ +static int mgsl_ioctl(struct tty_struct *tty, struct file * file, + unsigned int cmd, unsigned long arg) +{ + struct mgsl_struct * info = (struct mgsl_struct *)tty->driver_data; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_ioctl %s cmd=%08X\n", __FILE__,__LINE__, + info->device_name, cmd ); + + if (mgsl_paranoia_check(info, tty->device, "mgsl_ioctl")) + return -ENODEV; + + if ((cmd != TIOCGSERIAL) && (cmd != TIOCSSERIAL) && + (cmd != TIOCMIWAIT) && (cmd != TIOCGICOUNT)) { + if (tty->flags & (1 << TTY_IO_ERROR)) + return -EIO; + } + + return mgsl_ioctl_common(info, cmd, arg); +} + +int mgsl_ioctl_common(struct mgsl_struct *info, unsigned int cmd, unsigned long arg) +{ + int error; + struct mgsl_icount cnow; /* kernel counter temps */ + struct serial_icounter_struct *p_cuser; /* user space */ + unsigned long flags; + + switch (cmd) { + case TIOCMGET: + return get_modem_info(info, (unsigned int *) arg); + case TIOCMBIS: + case TIOCMBIC: + case TIOCMSET: + return set_modem_info(info, cmd, (unsigned int *) arg); + case MGSL_IOCGPARAMS: + return mgsl_get_params(info,(MGSL_PARAMS *)arg); + case MGSL_IOCSPARAMS: + return mgsl_set_params(info,(MGSL_PARAMS *)arg); + case MGSL_IOCGTXIDLE: + return mgsl_get_txidle(info,(int*)arg); + case MGSL_IOCSTXIDLE: + return mgsl_set_txidle(info,(int)arg); + case MGSL_IOCTXENABLE: + return mgsl_txenable(info,(int)arg); + case MGSL_IOCRXENABLE: + return mgsl_rxenable(info,(int)arg); + case MGSL_IOCTXABORT: + return mgsl_txabort(info); + case MGSL_IOCGSTATS: + return mgsl_get_stats(info,(struct mgsl_icount*)arg); + case MGSL_IOCWAITEVENT: + return mgsl_wait_event(info,(int*)arg); + case MGSL_IOCLOOPTXDONE: + return mgsl_loopmode_send_done(info); + case MGSL_IOCCLRMODCOUNT: + while(MOD_IN_USE) + MOD_DEC_USE_COUNT; + return 0; + + /* Wait for modem input (DCD,RI,DSR,CTS) change + * as specified by mask in arg (TIOCM_RNG/DSR/CD/CTS) + */ + case TIOCMIWAIT: + return modem_input_wait(info,(int)arg); + + /* + * Get counter of input serial line interrupts (DCD,RI,DSR,CTS) + * Return: write counters to the user passed counter struct + * NB: both 1->0 and 0->1 transitions are counted except for + * RI where only 0->1 is counted. 
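+ * For example (an illustrative user-space sketch, not part of this + * driver), a process reads these counters through the standard + * serial API: + * struct serial_icounter_struct icnt; + * ioctl(fd, TIOCGICOUNT, &icnt); + * where fd is an open descriptor for this tty.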
+ */ + case TIOCGICOUNT: + spin_lock_irqsave(&info->irq_spinlock,flags); + cnow = info->icount; + spin_unlock_irqrestore(&info->irq_spinlock,flags); + p_cuser = (struct serial_icounter_struct *) arg; + PUT_USER(error,cnow.cts, &p_cuser->cts); + if (error) return error; + PUT_USER(error,cnow.dsr, &p_cuser->dsr); + if (error) return error; + PUT_USER(error,cnow.rng, &p_cuser->rng); + if (error) return error; + PUT_USER(error,cnow.dcd, &p_cuser->dcd); + if (error) return error; + PUT_USER(error,cnow.rx, &p_cuser->rx); + if (error) return error; + PUT_USER(error,cnow.tx, &p_cuser->tx); + if (error) return error; + PUT_USER(error,cnow.frame, &p_cuser->frame); + if (error) return error; + PUT_USER(error,cnow.overrun, &p_cuser->overrun); + if (error) return error; + PUT_USER(error,cnow.parity, &p_cuser->parity); + if (error) return error; + PUT_USER(error,cnow.brk, &p_cuser->brk); + if (error) return error; + PUT_USER(error,cnow.buf_overrun, &p_cuser->buf_overrun); + if (error) return error; + return 0; + default: + return -ENOIOCTLCMD; + } + return 0; +} + +/* mgsl_set_termios() + * + * Set new termios settings + * + * Arguments: + * + * tty pointer to tty structure + * termios pointer to buffer to hold returned old termios + * + * Return Value: None + */ +static void mgsl_set_termios(struct tty_struct *tty, struct termios *old_termios) +{ + struct mgsl_struct *info = (struct mgsl_struct *)tty->driver_data; + unsigned long flags; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_set_termios %s\n", __FILE__,__LINE__, + tty->driver.name ); + + /* just return if nothing has changed */ + if ((tty->termios->c_cflag == old_termios->c_cflag) + && (RELEVANT_IFLAG(tty->termios->c_iflag) + == RELEVANT_IFLAG(old_termios->c_iflag))) + return; + + mgsl_change_params(info); + + /* Handle transition to B0 status */ + if (old_termios->c_cflag & CBAUD && + !(tty->termios->c_cflag & CBAUD)) { + info->serial_signals &= ~(SerialSignal_RTS + SerialSignal_DTR); + spin_lock_irqsave(&info->irq_spinlock,flags); + usc_set_serial_signals(info); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + } + + /* Handle transition away from B0 status */ + if (!(old_termios->c_cflag & CBAUD) && + tty->termios->c_cflag & CBAUD) { + info->serial_signals |= SerialSignal_DTR; + if (!(tty->termios->c_cflag & CRTSCTS) || + !test_bit(TTY_THROTTLED, &tty->flags)) { + info->serial_signals |= SerialSignal_RTS; + } + spin_lock_irqsave(&info->irq_spinlock,flags); + usc_set_serial_signals(info); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + } + + /* Handle turning off CRTSCTS */ + if (old_termios->c_cflag & CRTSCTS && + !(tty->termios->c_cflag & CRTSCTS)) { + tty->hw_stopped = 0; + mgsl_start(tty); + } + +} /* end of mgsl_set_termios() */ + +/* mgsl_close() + * + * Called when port is closed. Wait for remaining data to be + * sent. Disable port and free resources. 
+ * + * Arguments: + * + * tty pointer to open tty structure + * filp pointer to open file object + * + * Return Value: None + */ +static void mgsl_close(struct tty_struct *tty, struct file * filp) +{ + struct mgsl_struct * info = (struct mgsl_struct *)tty->driver_data; + + if (!info || mgsl_paranoia_check(info, tty->device, "mgsl_close")) + return; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_close(%s) entry, count=%d\n", + __FILE__,__LINE__, info->device_name, info->count); + + if (!info->count || tty_hung_up_p(filp)) + goto cleanup; + + if ((tty->count == 1) && (info->count != 1)) { + /* + * tty->count is 1 and the tty structure will be freed. + * info->count should be one in this case. + * if it's not, correct it so that the port is shutdown. + */ + printk("mgsl_close: bad refcount; tty->count is 1, " + "info->count is %d\n", info->count); + info->count = 1; + } + + info->count--; + + /* if at least one open remaining, leave hardware active */ + if (info->count) + goto cleanup; + + info->flags |= ASYNC_CLOSING; + + /* Save the termios structure, since this port may have + * separate termios for callout and dialin. + */ + if (info->flags & ASYNC_NORMAL_ACTIVE) + info->normal_termios = *tty->termios; + if (info->flags & ASYNC_CALLOUT_ACTIVE) + info->callout_termios = *tty->termios; + + /* set tty->closing to notify line discipline to + * only process XON/XOFF characters. Only the N_TTY + * discipline appears to use this (ppp does not). + */ + tty->closing = 1; + + /* wait for transmit data to clear all layers */ + + if (info->closing_wait != ASYNC_CLOSING_WAIT_NONE) { + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_close(%s) calling tty_wait_until_sent\n", + __FILE__,__LINE__, info->device_name ); + tty_wait_until_sent(tty, info->closing_wait); + } + + if (info->flags & ASYNC_INITIALIZED) + mgsl_wait_until_sent(tty, info->timeout); + + if (tty->driver.flush_buffer) + tty->driver.flush_buffer(tty); + + if (tty->ldisc.flush_buffer) + tty->ldisc.flush_buffer(tty); + + shutdown(info); + + tty->closing = 0; + info->tty = 0; + + if (info->blocked_open) { + if (info->close_delay) { + set_current_state(TASK_INTERRUPTIBLE); + schedule_timeout(info->close_delay); + } + wake_up_interruptible(&info->open_wait); + } + + info->flags &= ~(ASYNC_NORMAL_ACTIVE|ASYNC_CALLOUT_ACTIVE| + ASYNC_CLOSING); + + wake_up_interruptible(&info->close_wait); + +cleanup: + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_close(%s) exit, count=%d\n", __FILE__,__LINE__, + tty->driver.name, info->count); + if(MOD_IN_USE) + MOD_DEC_USE_COUNT; + +} /* end of mgsl_close() */ + +/* mgsl_wait_until_sent() + * + * Wait until the transmitter is empty. + * + * Arguments: + * + * tty pointer to tty info structure + * timeout time to wait for send completion + * + * Return Value: None + */ +static void mgsl_wait_until_sent(struct tty_struct *tty, int timeout) +{ + struct mgsl_struct * info = (struct mgsl_struct *)tty->driver_data; + unsigned long orig_jiffies, char_time; + + if (!info ) + return; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_wait_until_sent(%s) entry\n", + __FILE__,__LINE__, info->device_name ); + + if (mgsl_paranoia_check(info, tty->device, "mgsl_wait_until_sent")) + return; + + if (!(info->flags & ASYNC_INITIALIZED)) + goto exit; + + orig_jiffies = jiffies; + + /* Set check interval to 1/5 of estimated time to + * send a character, and make it at least 1. The check + * interval should also be less than the timeout. 
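+ * (Purely illustrative arithmetic: if info->timeout had come out at + * 30*HZ, then char_time = 30*HZ/(32*5), about 18 jiffies at HZ=100.)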
* Note: use tight timings here to satisfy the NIST-PCTS. + */ + + if ( info->params.data_rate ) { + char_time = info->timeout/(32 * 5); + if (!char_time) + char_time++; + } else + char_time = 1; + + if (timeout) + char_time = MIN(char_time, timeout); + + if ( info->params.mode == MGSL_MODE_HDLC || + info->params.mode == MGSL_MODE_RAW ) { + while (info->tx_active) { + set_current_state(TASK_INTERRUPTIBLE); + schedule_timeout(char_time); + if (signal_pending(current)) + break; + if (timeout && ((orig_jiffies + timeout) < jiffies)) + break; + } + } else { + while (!(usc_InReg(info,TCSR) & TXSTATUS_ALL_SENT) && + info->tx_enabled) { + set_current_state(TASK_INTERRUPTIBLE); + schedule_timeout(char_time); + if (signal_pending(current)) + break; + if (timeout && ((orig_jiffies + timeout) < jiffies)) + break; + } + } + +exit: + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_wait_until_sent(%s) exit\n", + __FILE__,__LINE__, info->device_name ); + +} /* end of mgsl_wait_until_sent() */ + +/* mgsl_hangup() + * + * Called by tty_hangup() when a hangup is signaled. + * This is the same as closing all open files for the port. + * + * Arguments: tty pointer to associated tty object + * Return Value: None + */ +static void mgsl_hangup(struct tty_struct *tty) +{ + struct mgsl_struct * info = (struct mgsl_struct *)tty->driver_data; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_hangup(%s)\n", + __FILE__,__LINE__, info->device_name ); + + if (mgsl_paranoia_check(info, tty->device, "mgsl_hangup")) + return; + + mgsl_flush_buffer(tty); + shutdown(info); + + info->count = 0; + info->flags &= ~(ASYNC_NORMAL_ACTIVE|ASYNC_CALLOUT_ACTIVE); + info->tty = 0; + + wake_up_interruptible(&info->open_wait); + +} /* end of mgsl_hangup() */ + +/* block_til_ready() + * + * Block the current process until the specified port + * is ready to be opened. + * + * Arguments: + * + * tty pointer to tty info structure + * filp pointer to open file object + * info pointer to device instance data + * + * Return Value: 0 if success, otherwise error code + */ +static int block_til_ready(struct tty_struct *tty, struct file * filp, + struct mgsl_struct *info) +{ + DECLARE_WAITQUEUE(wait, current); + int retval; + int do_clocal = 0, extra_count = 0; + unsigned long flags; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):block_til_ready on %s\n", + __FILE__,__LINE__, tty->driver.name ); + + if (tty->driver.subtype == SERIAL_TYPE_CALLOUT) { + /* this is a callout device */ + /* just verify that normal device is not in use */ + if (info->flags & ASYNC_NORMAL_ACTIVE) + return -EBUSY; + if ((info->flags & ASYNC_CALLOUT_ACTIVE) && + (info->flags & ASYNC_SESSION_LOCKOUT) && + (info->session != current->session)) + return -EBUSY; + if ((info->flags & ASYNC_CALLOUT_ACTIVE) && + (info->flags & ASYNC_PGRP_LOCKOUT) && + (info->pgrp != current->pgrp)) + return -EBUSY; + info->flags |= ASYNC_CALLOUT_ACTIVE; + return 0; + } + + if (filp->f_flags & O_NONBLOCK || tty->flags & (1 << TTY_IO_ERROR)){ + /* nonblock mode is set or port is not enabled */ + /* just verify that callout device is not active */ + if (info->flags & ASYNC_CALLOUT_ACTIVE) + return -EBUSY; + info->flags |= ASYNC_NORMAL_ACTIVE; + return 0; + } + + if (info->flags & ASYNC_CALLOUT_ACTIVE) { + if (info->normal_termios.c_cflag & CLOCAL) + do_clocal = 1; + } else { + if (tty->termios->c_cflag & CLOCAL) + do_clocal = 1; + } + + /* Wait for carrier detect and the line to become + * free (i.e., not in use by the callout). 
While we are in + * this loop, info->count is dropped by one, so that + * mgsl_close() knows when to free things. We restore it upon + * exit, either normal or abnormal. + */ + + retval = 0; + add_wait_queue(&info->open_wait, &wait); + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):block_til_ready before block on %s count=%d\n", + __FILE__,__LINE__, tty->driver.name, info->count ); + + save_flags(flags); cli(); + if (!tty_hung_up_p(filp)) { + extra_count = 1; + info->count--; + } + restore_flags(flags); + info->blocked_open++; + + while (1) { + if (!(info->flags & ASYNC_CALLOUT_ACTIVE) && + (tty->termios->c_cflag & CBAUD)) { + spin_lock_irqsave(&info->irq_spinlock,flags); + info->serial_signals |= SerialSignal_RTS + SerialSignal_DTR; + usc_set_serial_signals(info); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + } + + set_current_state(TASK_INTERRUPTIBLE); + + if (tty_hung_up_p(filp) || !(info->flags & ASYNC_INITIALIZED)){ + retval = (info->flags & ASYNC_HUP_NOTIFY) ? + -EAGAIN : -ERESTARTSYS; + break; + } + + spin_lock_irqsave(&info->irq_spinlock,flags); + usc_get_serial_signals(info); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + + if (!(info->flags & ASYNC_CALLOUT_ACTIVE) && + !(info->flags & ASYNC_CLOSING) && + (do_clocal || (info->serial_signals & SerialSignal_DCD)) ) { + break; + } + + if (signal_pending(current)) { + retval = -ERESTARTSYS; + break; + } + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):block_til_ready blocking on %s count=%d\n", + __FILE__,__LINE__, tty->driver.name, info->count ); + + schedule(); + } + + set_current_state(TASK_RUNNING); + remove_wait_queue(&info->open_wait, &wait); + + if (extra_count) + info->count++; + info->blocked_open--; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):block_til_ready after blocking on %s count=%d\n", + __FILE__,__LINE__, tty->driver.name, info->count ); + + if (!retval) + info->flags |= ASYNC_NORMAL_ACTIVE; + + return retval; + +} /* end of block_til_ready() */ + +/* mgsl_open() + * + * Called when a port is opened. Init and enable port. + * Perform serial-specific initialization for the tty structure. + * + * Arguments: tty pointer to tty info structure + * filp associated file pointer + * + * Return Value: 0 if success, otherwise error code + */ +static int mgsl_open(struct tty_struct *tty, struct file * filp) +{ + struct mgsl_struct *info; + int retval, line; + unsigned long page; + unsigned long flags; + + /* verify range of specified line number */ + line = MINOR(tty->device) - tty->driver.minor_start; + if ((line < 0) || (line >= mgsl_device_count)) { + printk("%s(%d):mgsl_open with illegal line #%d.\n", + __FILE__,__LINE__,line); + return -ENODEV; + } + + /* find the info structure for the specified line */ + info = mgsl_device_list; + while(info && info->line != line) + info = info->next_device; + if ( !info ){ + printk("%s(%d):Can't find specified device on open (line=%d)\n", + __FILE__,__LINE__,line); + return -ENODEV; + } + + tty->driver_data = info; + info->tty = tty; + if (mgsl_paranoia_check(info, tty->device, "mgsl_open")) + return -ENODEV; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_open(%s), old ref count = %d\n", + __FILE__,__LINE__,tty->driver.name, info->count); + + MOD_INC_USE_COUNT; + + /* If port is closing, signal caller to try again */ + if (tty_hung_up_p(filp) || info->flags & ASYNC_CLOSING){ + if (info->flags & ASYNC_CLOSING) + interruptible_sleep_on(&info->close_wait); + retval = ((info->flags & ASYNC_HUP_NOTIFY) ? 
+ -EAGAIN : -ERESTARTSYS); + goto cleanup; + } + + if (!tmp_buf) { + page = get_free_page(GFP_KERNEL); + if (!page) { + retval = -ENOMEM; + goto cleanup; + } + if (tmp_buf) + free_page(page); + else + tmp_buf = (unsigned char *) page; + } + + info->tty->low_latency = (info->flags & ASYNC_LOW_LATENCY) ? 1 : 0; + + spin_lock_irqsave(&info->netlock, flags); + if (info->netcount) { + retval = -EBUSY; + spin_unlock_irqrestore(&info->netlock, flags); + goto cleanup; + } + info->count++; + spin_unlock_irqrestore(&info->netlock, flags); + + if (info->count == 1) { + /* 1st open on this device, init hardware */ + retval = startup(info); + if (retval < 0) + goto cleanup; + } + + retval = block_til_ready(tty, filp, info); + if (retval) { + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):block_til_ready(%s) returned %d\n", + __FILE__,__LINE__, info->device_name, retval); + goto cleanup; + } + + if ((info->count == 1) && + info->flags & ASYNC_SPLIT_TERMIOS) { + if (tty->driver.subtype == SERIAL_TYPE_NORMAL) + *tty->termios = info->normal_termios; + else + *tty->termios = info->callout_termios; + mgsl_change_params(info); + } + + info->session = current->session; + info->pgrp = current->pgrp; + + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_open(%s) success\n", + __FILE__,__LINE__, info->device_name); + retval = 0; + +cleanup: + if (retval) { + if(MOD_IN_USE) + MOD_DEC_USE_COUNT; + if(info->count) + info->count--; + } + + return retval; + +} /* end of mgsl_open() */ + +/* + * /proc fs routines.... + */ + +static inline int line_info(char *buf, struct mgsl_struct *info) +{ + char stat_buf[30]; + int ret; + unsigned long flags; + + if (info->bus_type == MGSL_BUS_TYPE_PCI) { + ret = sprintf(buf, "%s:PCI io:%04X irq:%d mem:%08X lcr:%08X", + info->device_name, info->io_base, info->irq_level, + info->phys_memory_base, info->phys_lcr_base); + } else { + ret = sprintf(buf, "%s:(E)ISA io:%04X irq:%d dma:%d", + info->device_name, info->io_base, + info->irq_level, info->dma_level); + } + + /* output current serial signal states */ + spin_lock_irqsave(&info->irq_spinlock,flags); + usc_get_serial_signals(info); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + + stat_buf[0] = 0; + stat_buf[1] = 0; + if (info->serial_signals & SerialSignal_RTS) + strcat(stat_buf, "|RTS"); + if (info->serial_signals & SerialSignal_CTS) + strcat(stat_buf, "|CTS"); + if (info->serial_signals & SerialSignal_DTR) + strcat(stat_buf, "|DTR"); + if (info->serial_signals & SerialSignal_DSR) + strcat(stat_buf, "|DSR"); + if (info->serial_signals & SerialSignal_DCD) + strcat(stat_buf, "|CD"); + if (info->serial_signals & SerialSignal_RI) + strcat(stat_buf, "|RI"); + + if (info->params.mode == MGSL_MODE_HDLC || + info->params.mode == MGSL_MODE_RAW ) { + ret += sprintf(buf+ret, " HDLC txok:%d rxok:%d", + info->icount.txok, info->icount.rxok); + if (info->icount.txunder) + ret += sprintf(buf+ret, " txunder:%d", info->icount.txunder); + if (info->icount.txabort) + ret += sprintf(buf+ret, " txabort:%d", info->icount.txabort); + if (info->icount.rxshort) + ret += sprintf(buf+ret, " rxshort:%d", info->icount.rxshort); + if (info->icount.rxlong) + ret += sprintf(buf+ret, " rxlong:%d", info->icount.rxlong); + if (info->icount.rxover) + ret += sprintf(buf+ret, " rxover:%d", info->icount.rxover); + if (info->icount.rxcrc) + ret += sprintf(buf+ret, " rxcrc:%d", info->icount.rxcrc); + } else { + ret += sprintf(buf+ret, " ASYNC tx:%d rx:%d", + info->icount.tx, info->icount.rx); + if (info->icount.frame) + ret += sprintf(buf+ret, " 
fe:%d", info->icount.frame); + if (info->icount.parity) + ret += sprintf(buf+ret, " pe:%d", info->icount.parity); + if (info->icount.brk) + ret += sprintf(buf+ret, " brk:%d", info->icount.brk); + if (info->icount.overrun) + ret += sprintf(buf+ret, " oe:%d", info->icount.overrun); + } + + /* Append serial signal status to end */ + ret += sprintf(buf+ret, " %s\n", stat_buf+1); + + ret += sprintf(buf+ret, "txactive=%d bh_req=%d bh_run=%d pending_bh=%x\n", + info->tx_active,info->bh_requested,info->bh_running, + info->pending_bh); + + spin_lock_irqsave(&info->irq_spinlock,flags); + { + u16 Tcsr = usc_InReg( info, TCSR ); + u16 Tdmr = usc_InDmaReg( info, TDMR ); + u16 Ticr = usc_InReg( info, TICR ); + u16 Rscr = usc_InReg( info, RCSR ); + u16 Rdmr = usc_InDmaReg( info, RDMR ); + u16 Ricr = usc_InReg( info, RICR ); + u16 Icr = usc_InReg( info, ICR ); + u16 Dccr = usc_InReg( info, DCCR ); + u16 Tmr = usc_InReg( info, TMR ); + u16 Tccr = usc_InReg( info, TCCR ); + u16 Ccar = inw( info->io_base + CCAR ); + ret += sprintf(buf+ret, "tcsr=%04X tdmr=%04X ticr=%04X rcsr=%04X rdmr=%04X\n" + "ricr=%04X icr =%04X dccr=%04X tmr=%04X tccr=%04X ccar=%04X\n", + Tcsr,Tdmr,Ticr,Rscr,Rdmr,Ricr,Icr,Dccr,Tmr,Tccr,Ccar ); + } + spin_unlock_irqrestore(&info->irq_spinlock,flags); + + return ret; + +} /* end of line_info() */ + +/* mgsl_read_proc() + * + * Called to print information about devices + * + * Arguments: + * page page of memory to hold returned info + * start + * off + * count + * eof + * data + * + * Return Value: + */ +int mgsl_read_proc(char *page, char **start, off_t off, int count, + int *eof, void *data) +{ + int len = 0, l; + off_t begin = 0; + struct mgsl_struct *info; + + len += sprintf(page, "synclink driver:%s\n", driver_version); + + info = mgsl_device_list; + while( info ) { + l = line_info(page + len, info); + len += l; + if (len+begin > off+count) + goto done; + if (len+begin < off) { + begin += len; + len = 0; + } + info = info->next_device; + } + + *eof = 1; +done: + if (off >= len+begin) + return 0; + *start = page + (off-begin); + return ((count < begin+len-off) ? count : begin+len-off); + +} /* end of mgsl_read_proc() */ + +/* mgsl_allocate_dma_buffers() + * + * Allocate and format DMA buffers (ISA adapter) + * or format shared memory buffers (PCI adapter). + * + * Arguments: info pointer to device instance data + * Return Value: 0 if success, otherwise error + */ +int mgsl_allocate_dma_buffers(struct mgsl_struct *info) +{ + unsigned short BuffersPerFrame; + + info->last_mem_alloc = 0; + + /* Calculate the number of DMA buffers necessary to hold the */ + /* largest allowable frame size. Note: If the max frame size is */ + /* not an even multiple of the DMA buffer size then we need to */ + /* round the buffer count per frame up one. */ + + BuffersPerFrame = (unsigned short)(info->max_frame_size/DMABUFFERSIZE); + if ( info->max_frame_size % DMABUFFERSIZE ) + BuffersPerFrame++; + + if ( info->bus_type == MGSL_BUS_TYPE_PCI ) { + /* + * The PCI adapter has 256KBytes of shared memory to use. + * This is 64 4K pages. + * + * The first page is used for padding at this time so the + * buffer list does not begin at offset 0 of the PCI + * adapter's shared memory. + * + * The 2nd page is used for the buffer list. A 4K buffer + * list can hold 128 DMA_BUFFER structures at 32 bytes + * each. + * + * This leaves 62 (MAXDMABUFS) 4K pages. + * + * The next N pages are used for transmit frame(s). 
We + * reserve enough 4K page blocks to hold the required + * number of transmit dma buffers (num_tx_dma_buffers), + * each of MaxFrameSize size. + * + * Of the remaining pages (62-N), determine how many can + * be used to receive full MaxFrameSize inbound frames + */ + info->tx_buffer_count = info->num_tx_dma_buffers * BuffersPerFrame; + info->rx_buffer_count = MAXDMABUFS - info->tx_buffer_count; + } else { + /* Calculate the number of PAGE_SIZE buffers needed for */ + /* receive and transmit DMA buffers. */ + + + /* Calculate the number of DMA buffers necessary to */ + /* hold 7 max size receive frames and one max size transmit frame. */ + /* The receive buffer count is bumped by one so we avoid an */ + /* End of List condition if all receive buffers are used when */ + /* using linked list DMA buffers. */ + + info->tx_buffer_count = info->num_tx_dma_buffers * BuffersPerFrame; + info->rx_buffer_count = (BuffersPerFrame * MAXRXFRAMES) + 6; + + /* + * limit total TxBuffers & RxBuffers to 62 4K total + * (ala PCI Allocation) + */ + + if ( (info->tx_buffer_count + info->rx_buffer_count) > 62 ) + info->rx_buffer_count = 62 - info->tx_buffer_count; + + } + + if ( debug_level >= DEBUG_LEVEL_INFO ) + printk("%s(%d):Allocating %d TX and %d RX DMA buffers.\n", + __FILE__,__LINE__, info->tx_buffer_count,info->rx_buffer_count); + + if ( mgsl_alloc_buffer_list_memory( info ) < 0 || + mgsl_alloc_frame_memory(info, info->rx_buffer_list, info->rx_buffer_count) < 0 || + mgsl_alloc_frame_memory(info, info->tx_buffer_list, info->tx_buffer_count) < 0 || + mgsl_alloc_intermediate_rxbuffer_memory(info) < 0 || + mgsl_alloc_intermediate_txbuffer_memory(info) < 0 ) { + printk("%s(%d):Can't allocate DMA buffer memory\n",__FILE__,__LINE__); + return -ENOMEM; + } + + mgsl_reset_rx_dma_buffers( info ); + mgsl_reset_tx_dma_buffers( info ); + + return 0; + +} /* end of mgsl_allocate_dma_buffers() */ + +/* + * mgsl_alloc_buffer_list_memory() + * + * Allocate a common DMA buffer for use as the + * receive and transmit buffer lists. + * + * A buffer list is a set of buffer entries where each entry contains + * a pointer to an actual buffer and a pointer to the next buffer entry + * (plus some other info about the buffer). + * + * The buffer entries for a list are built to form a circular list so + * that when the entire list has been traversed you start back at the + * beginning. + * + * This function allocates memory for just the buffer entries. + * The links (pointer to next entry) are filled in with the physical + * address of the next entry so the adapter can navigate the list + * using bus master DMA. The pointers to the actual buffers are filled + * out later when the actual buffers are allocated. + * + * Arguments: info pointer to device instance data + * Return Value: 0 if success, otherwise error + */ +int mgsl_alloc_buffer_list_memory( struct mgsl_struct *info ) +{ + unsigned int i; + + if ( info->bus_type == MGSL_BUS_TYPE_PCI ) { + /* PCI adapter uses shared memory. */ + info->buffer_list = info->memory_base + info->last_mem_alloc; + info->buffer_list_phys = info->last_mem_alloc; + info->last_mem_alloc += BUFFERLISTSIZE; + } else { + /* ISA adapter uses system memory. */ + /* The buffer lists are allocated as a common buffer that both */ + /* the processor and adapter can access. This allows the driver to */ + /* inspect portions of the buffer while other portions are being */ + /* updated by the adapter using Bus Master DMA. 
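+ * Note on the GFP_DMA used below: it keeps the allocation in + * ISA-DMA-addressable memory (the low 16MB on x86), which a + * bus-mastering ISA adapter with 24-bit addressing requires.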
*/ + + info->buffer_list = kmalloc(BUFFERLISTSIZE, GFP_KERNEL | GFP_DMA); + if ( info->buffer_list == NULL ) + return -ENOMEM; + + info->buffer_list_phys = virt_to_bus(info->buffer_list); + } + + /* We got the memory for the buffer entry lists. */ + /* Initialize the memory block to all zeros. */ + memset( info->buffer_list, 0, BUFFERLISTSIZE ); + + /* Save virtual address pointers to the receive and */ + /* transmit buffer lists. (Receive 1st). These pointers will */ + /* be used by the processor to access the lists. */ + info->rx_buffer_list = (DMABUFFERENTRY *)info->buffer_list; + info->tx_buffer_list = (DMABUFFERENTRY *)info->buffer_list; + info->tx_buffer_list += info->rx_buffer_count; + + /* + * Build the links for the buffer entry lists such that + * two circular lists are built. (Transmit and Receive). + * + * Note: the links are physical addresses + * which are read by the adapter to determine the next + * buffer entry to use. + */ + + for ( i = 0; i < info->rx_buffer_count; i++ ) { + /* calculate and store physical address of this buffer entry */ + info->rx_buffer_list[i].phys_entry = + info->buffer_list_phys + (i * sizeof(DMABUFFERENTRY)); + + /* calculate and store physical address of */ + /* next entry in circular list of entries */ + + info->rx_buffer_list[i].link = info->buffer_list_phys; + + if ( i < info->rx_buffer_count - 1 ) + info->rx_buffer_list[i].link += (i + 1) * sizeof(DMABUFFERENTRY); + } + + for ( i = 0; i < info->tx_buffer_count; i++ ) { + /* calculate and store physical address of this buffer entry */ + info->tx_buffer_list[i].phys_entry = info->buffer_list_phys + + ((info->rx_buffer_count + i) * sizeof(DMABUFFERENTRY)); + + /* calculate and store physical address of */ + /* next entry in circular list of entries */ + + info->tx_buffer_list[i].link = info->buffer_list_phys + + info->rx_buffer_count * sizeof(DMABUFFERENTRY); + + if ( i < info->tx_buffer_count - 1 ) + info->tx_buffer_list[i].link += (i + 1) * sizeof(DMABUFFERENTRY); + } + + return 0; + +} /* end of mgsl_alloc_buffer_list_memory() */ + +/* Free DMA buffers allocated for use as the + * receive and transmit buffer lists. + * Warning: + * + * The data transfer buffers associated with the buffer list + * MUST be freed before freeing the buffer list itself because + * the buffer list contains the information necessary to free + * the individual buffers! + */ +void mgsl_free_buffer_list_memory( struct mgsl_struct *info ) +{ + if ( info->buffer_list && info->bus_type != MGSL_BUS_TYPE_PCI ) + kfree(info->buffer_list); + + info->buffer_list = NULL; + info->rx_buffer_list = NULL; + info->tx_buffer_list = NULL; + +} /* end of mgsl_free_buffer_list_memory() */ + +/* + * mgsl_alloc_frame_memory() + * + * Allocate the frame DMA buffers used by the specified buffer list. + * Each DMA buffer will be one memory page in size. This is necessary + * because memory can fragment enough that it may be impossible + * to allocate contiguous pages. + * + * Arguments: + * + * info pointer to device instance data + * BufferList pointer to list of buffer entries + * Buffercount count of buffer entries in buffer list + * + * Return Value: 0 if success, otherwise -ENOMEM + */ +int mgsl_alloc_frame_memory(struct mgsl_struct *info,DMABUFFERENTRY *BufferList,int Buffercount) +{ + int i; + unsigned long phys_addr; + + /* Allocate page sized buffers for the receive buffer list */ + + for ( i = 0; i < Buffercount; i++ ) { + if ( info->bus_type == MGSL_BUS_TYPE_PCI ) { + /* PCI adapter uses shared memory buffers. 
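+ * The shared-memory window is carved up by a simple bump + * allocator: each buffer takes the next DMABUFFERSIZE bytes + * at last_mem_alloc, mirroring mgsl_alloc_buffer_list_memory() + * above.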
*/ + BufferList[i].virt_addr = info->memory_base + info->last_mem_alloc; + phys_addr = info->last_mem_alloc; + info->last_mem_alloc += DMABUFFERSIZE; + } else { + /* ISA adapter uses system memory. */ + BufferList[i].virt_addr = + kmalloc(DMABUFFERSIZE, GFP_KERNEL | GFP_DMA); + if ( BufferList[i].virt_addr == NULL ) + return -ENOMEM; + phys_addr = virt_to_bus(BufferList[i].virt_addr); + } + BufferList[i].phys_addr = phys_addr; + } + + return 0; + +} /* end of mgsl_alloc_frame_memory() */ + +/* + * mgsl_free_frame_memory() + * + * Free the buffers associated with + * each buffer entry of a buffer list. + * + * Arguments: + * + * info pointer to device instance data + * BufferList pointer to list of buffer entries + * Buffercount count of buffer entries in buffer list + * + * Return Value: None + */ +void mgsl_free_frame_memory(struct mgsl_struct *info, DMABUFFERENTRY *BufferList, int Buffercount) +{ + int i; + + if ( BufferList ) { + for ( i = 0 ; i < Buffercount ; i++ ) { + if ( BufferList[i].virt_addr ) { + if ( info->bus_type != MGSL_BUS_TYPE_PCI ) + kfree(BufferList[i].virt_addr); + BufferList[i].virt_addr = NULL; + } + } + } + +} /* end of mgsl_free_frame_memory() */ + +/* mgsl_free_dma_buffers() + * + * Free DMA buffers + * + * Arguments: info pointer to device instance data + * Return Value: None + */ +void mgsl_free_dma_buffers( struct mgsl_struct *info ) +{ + mgsl_free_frame_memory( info, info->rx_buffer_list, info->rx_buffer_count ); + mgsl_free_frame_memory( info, info->tx_buffer_list, info->tx_buffer_count ); + mgsl_free_buffer_list_memory( info ); + +} /* end of mgsl_free_dma_buffers() */ + + +/* + * mgsl_alloc_intermediate_rxbuffer_memory() + * + * Allocate a buffer large enough to hold max_frame_size. This buffer + * is used to pass an assembled frame to the line discipline. + * + * Arguments: + * + * info pointer to device instance data + * + * Return Value: 0 if success, otherwise -ENOMEM + */ +int mgsl_alloc_intermediate_rxbuffer_memory(struct mgsl_struct *info) +{ + info->intermediate_rxbuffer = kmalloc(info->max_frame_size, GFP_KERNEL | GFP_DMA); + if ( info->intermediate_rxbuffer == NULL ) + return -ENOMEM; + + return 0; + +} /* end of mgsl_alloc_intermediate_rxbuffer_memory() */ + +/* + * mgsl_free_intermediate_rxbuffer_memory() + * + * + * Arguments: + * + * info pointer to device instance data + * + * Return Value: None + */ +void mgsl_free_intermediate_rxbuffer_memory(struct mgsl_struct *info) +{ + if ( info->intermediate_rxbuffer ) + kfree(info->intermediate_rxbuffer); + + info->intermediate_rxbuffer = NULL; + +} /* end of mgsl_free_intermediate_rxbuffer_memory() */ + +/* + * mgsl_alloc_intermediate_txbuffer_memory() + * + * Allocate intermediate transmit buffer(s) large enough to hold max_frame_size. + * This buffer is used to load transmit frames into the adapter's dma transfer + * buffers when there is sufficient space. 
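+ * The holding buffers behave as a FIFO ring: save_tx_buffer_request() + * stores frames at put_tx_holding_index and load_next_tx_holding_buffer() + * drains them from get_tx_holding_index, both indices wrapping at + * num_tx_holding_buffers (see the two routines below).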
+ * + * Arguments: + * + * info pointer to device instance data + * + * Return Value: 0 if success, otherwise -ENOMEM + */ +int mgsl_alloc_intermediate_txbuffer_memory(struct mgsl_struct *info) +{ + int i; + + if ( debug_level >= DEBUG_LEVEL_INFO ) + printk("%s %s(%d) allocating %d tx holding buffers\n", + info->device_name, __FILE__,__LINE__,info->num_tx_holding_buffers); + + memset(info->tx_holding_buffers,0,sizeof(info->tx_holding_buffers)); + + for ( i=0; i<info->num_tx_holding_buffers; ++i) { + info->tx_holding_buffers[i].buffer = + kmalloc(info->max_frame_size, GFP_KERNEL); + if ( info->tx_holding_buffers[i].buffer == NULL ) + return -ENOMEM; + } + + return 0; + +} /* end of mgsl_alloc_intermediate_txbuffer_memory() */ + +/* + * mgsl_free_intermediate_txbuffer_memory() + * + * + * Arguments: + * + * info pointer to device instance data + * + * Return Value: None + */ +void mgsl_free_intermediate_txbuffer_memory(struct mgsl_struct *info) +{ + int i; + + for ( i=0; i<info->num_tx_holding_buffers; ++i ) { + if ( info->tx_holding_buffers[i].buffer ) { + kfree(info->tx_holding_buffers[i].buffer); + info->tx_holding_buffers[i].buffer=NULL; + } + } + + info->get_tx_holding_index = 0; + info->put_tx_holding_index = 0; + info->tx_holding_count = 0; + +} /* end of mgsl_free_intermediate_txbuffer_memory() */ + + +/* + * load_next_tx_holding_buffer() + * + * attempts to load the next buffered tx request into the + * tx dma buffers + * + * Arguments: + * + * info pointer to device instance data + * + * Return Value: 1 if next buffered tx request loaded + * into adapter's tx dma buffer, + * 0 otherwise + */ +int load_next_tx_holding_buffer(struct mgsl_struct *info) +{ + int ret = 0; + + if ( info->tx_holding_count ) { + /* determine if we have enough tx dma buffers + * to accommodate the next tx frame + */ + struct tx_holding_buffer *ptx = + &info->tx_holding_buffers[info->get_tx_holding_index]; + int num_free = num_free_tx_dma_buffers(info); + int num_needed = ptx->buffer_size / DMABUFFERSIZE; + if ( ptx->buffer_size % DMABUFFERSIZE ) + ++num_needed; + + if (num_needed <= num_free) { + info->xmit_cnt = ptx->buffer_size; + mgsl_load_tx_dma_buffer(info,ptx->buffer,ptx->buffer_size); + + --info->tx_holding_count; + if ( ++info->get_tx_holding_index >= info->num_tx_holding_buffers) + info->get_tx_holding_index=0; + + /* restart transmit timer */ + del_timer(&info->tx_timer); + info->tx_timer.expires = jiffies + jiffies_from_ms(5000); + add_timer(&info->tx_timer); + + ret = 1; + } + } + + return ret; +} + +/* + * save_tx_buffer_request() + * + * attempt to store transmit frame request for later transmission + * + * Arguments: + * + * info pointer to device instance data + * Buffer pointer to buffer containing frame to load + * BufferSize size in bytes of frame in Buffer + * + * Return Value: 1 if able to store, 0 otherwise + */ +int save_tx_buffer_request(struct mgsl_struct *info,const char *Buffer, unsigned int BufferSize) +{ + struct tx_holding_buffer *ptx; + + if ( info->tx_holding_count >= info->num_tx_holding_buffers ) { + return 0; /* all buffers in use */ + } + + ptx = &info->tx_holding_buffers[info->put_tx_holding_index]; + ptx->buffer_size = BufferSize; + memcpy( ptx->buffer, Buffer, BufferSize); + + ++info->tx_holding_count; + if ( ++info->put_tx_holding_index >= info->num_tx_holding_buffers) + info->put_tx_holding_index=0; + + return 1; +} + +int mgsl_claim_resources(struct mgsl_struct *info) +{ + if (request_region(info->io_base,info->io_addr_size,"synclink") == NULL) { + printk( "%s(%d):I/O address 
conflict on device %s Addr=%08X\n", + __FILE__,__LINE__,info->device_name, info->io_base); + return -ENODEV; + } + info->io_addr_requested = 1; + + if ( request_irq(info->irq_level,mgsl_interrupt,info->irq_flags, + info->device_name, info ) < 0 ) { + printk( "%s(%d):Can't request interrupt on device %s IRQ=%d\n", + __FILE__,__LINE__,info->device_name, info->irq_level ); + goto errout; + } + info->irq_requested = 1; + + if ( info->bus_type == MGSL_BUS_TYPE_PCI ) { + if (request_mem_region(info->phys_memory_base,0x40000,"synclink") == NULL) { + printk( "%s(%d):mem addr conflict device %s Addr=%08X\n", + __FILE__,__LINE__,info->device_name, info->phys_memory_base); + goto errout; + } + info->shared_mem_requested = 1; + if (request_mem_region(info->phys_lcr_base + info->lcr_offset,128,"synclink") == NULL) { + printk( "%s(%d):lcr mem addr conflict device %s Addr=%08X\n", + __FILE__,__LINE__,info->device_name, info->phys_lcr_base + info->lcr_offset); + goto errout; + } + info->lcr_mem_requested = 1; + + info->memory_base = ioremap(info->phys_memory_base,0x40000); + if (!info->memory_base) { + printk( "%s(%d):Can't map shared memory on device %s MemAddr=%08X\n", + __FILE__,__LINE__,info->device_name, info->phys_memory_base ); + goto errout; + } + + if ( !mgsl_memory_test(info) ) { + printk( "%s(%d):Failed shared memory test %s MemAddr=%08X\n", + __FILE__,__LINE__,info->device_name, info->phys_memory_base ); + goto errout; + } + + info->lcr_base = ioremap(info->phys_lcr_base,PAGE_SIZE) + info->lcr_offset; + if (!info->lcr_base) { + printk( "%s(%d):Can't map LCR memory on device %s MemAddr=%08X\n", + __FILE__,__LINE__,info->device_name, info->phys_lcr_base ); + goto errout; + } + + } else { + /* claim DMA channel */ + + if (request_dma(info->dma_level,info->device_name) < 0){ + printk( "%s(%d):Can't request DMA channel on device %s DMA=%d\n", + __FILE__,__LINE__,info->device_name, info->dma_level ); + mgsl_release_resources( info ); + return -ENODEV; + } + info->dma_requested = 1; + + /* ISA adapter uses bus master DMA */ + set_dma_mode(info->dma_level,DMA_MODE_CASCADE); + enable_dma(info->dma_level); + } + + if ( mgsl_allocate_dma_buffers(info) < 0 ) { + printk( "%s(%d):Can't allocate DMA buffers on device %s DMA=%d\n", + __FILE__,__LINE__,info->device_name, info->dma_level ); + goto errout; + } + + return 0; +errout: + mgsl_release_resources(info); + return -ENODEV; + +} /* end of mgsl_claim_resources() */ + +void mgsl_release_resources(struct mgsl_struct *info) +{ + if ( debug_level >= DEBUG_LEVEL_INFO ) + printk( "%s(%d):mgsl_release_resources(%s) entry\n", + __FILE__,__LINE__,info->device_name ); + + if ( info->irq_requested ) { + free_irq(info->irq_level, info); + info->irq_requested = 0; + } + if ( info->dma_requested ) { + disable_dma(info->dma_level); + free_dma(info->dma_level); + info->dma_requested = 0; + } + mgsl_free_dma_buffers(info); + mgsl_free_intermediate_rxbuffer_memory(info); + mgsl_free_intermediate_txbuffer_memory(info); + + if ( info->io_addr_requested ) { + release_region(info->io_base,info->io_addr_size); + info->io_addr_requested = 0; + } + if ( info->shared_mem_requested ) { + release_mem_region(info->phys_memory_base,0x40000); + info->shared_mem_requested = 0; + } + if ( info->lcr_mem_requested ) { + release_mem_region(info->phys_lcr_base + info->lcr_offset,128); + info->lcr_mem_requested = 0; + } + if (info->memory_base){ + iounmap(info->memory_base); + info->memory_base = 0; + } + if (info->lcr_base){ + iounmap(info->lcr_base - info->lcr_offset); + info->lcr_base = 0; + } 
+ + if ( debug_level >= DEBUG_LEVEL_INFO ) + printk( "%s(%d):mgsl_release_resources(%s) exit\n", + __FILE__,__LINE__,info->device_name ); + +} /* end of mgsl_release_resources() */ + +/* mgsl_add_device() + * + * Add the specified device instance data structure to the + * global linked list of devices and increment the device count. + * + * Arguments: info pointer to device instance data + * Return Value: None + */ +void mgsl_add_device( struct mgsl_struct *info ) +{ + info->next_device = NULL; + info->line = mgsl_device_count; + sprintf(info->device_name,"ttySL%d",info->line); + + if (info->line < MAX_TOTAL_DEVICES) { + if (maxframe[info->line]) + info->max_frame_size = maxframe[info->line]; + info->dosyncppp = dosyncppp[info->line]; + + if (txdmabufs[info->line]) { + info->num_tx_dma_buffers = txdmabufs[info->line]; + if (info->num_tx_dma_buffers < 1) + info->num_tx_dma_buffers = 1; + } + + if (txholdbufs[info->line]) { + info->num_tx_holding_buffers = txholdbufs[info->line]; + if (info->num_tx_holding_buffers < 1) + info->num_tx_holding_buffers = 1; + else if (info->num_tx_holding_buffers > MAX_TX_HOLDING_BUFFERS) + info->num_tx_holding_buffers = MAX_TX_HOLDING_BUFFERS; + } + } + + mgsl_device_count++; + + if ( !mgsl_device_list ) + mgsl_device_list = info; + else { + struct mgsl_struct *current_dev = mgsl_device_list; + while( current_dev->next_device ) + current_dev = current_dev->next_device; + current_dev->next_device = info; + } + + if ( info->max_frame_size < 4096 ) + info->max_frame_size = 4096; + else if ( info->max_frame_size > 65535 ) + info->max_frame_size = 65535; + + if ( info->bus_type == MGSL_BUS_TYPE_PCI ) { + printk( "SyncLink device %s added:PCI bus IO=%04X IRQ=%d Mem=%08X LCR=%08X MaxFrameSize=%u\n", + info->device_name, info->io_base, info->irq_level, + info->phys_memory_base, info->phys_lcr_base, + info->max_frame_size ); + } else { + printk( "SyncLink device %s added:ISA bus IO=%04X IRQ=%d DMA=%d MaxFrameSize=%u\n", + info->device_name, info->io_base, info->irq_level, info->dma_level, + info->max_frame_size ); + } + +#ifdef CONFIG_SYNCLINK_SYNCPPP +#ifdef MODULE + if (info->dosyncppp) +#endif + mgsl_sppp_init(info); +#endif +} /* end of mgsl_add_device() */ + +/* mgsl_allocate_device() + * + * Allocate and initialize a device instance structure + * + * Arguments: none + * Return Value: pointer to mgsl_struct if success, otherwise NULL + */ +struct mgsl_struct* mgsl_allocate_device() +{ + struct mgsl_struct *info; + + info = (struct mgsl_struct *)kmalloc(sizeof(struct mgsl_struct), + GFP_KERNEL); + + if (!info) { + printk("Error can't allocate device instance data\n"); + } else { + memset(info, 0, sizeof(struct mgsl_struct)); + info->magic = MGSL_MAGIC; + info->task.sync = 0; + info->task.routine = mgsl_bh_handler; + info->task.data = info; + info->max_frame_size = 4096; + info->close_delay = 5*HZ/10; + info->closing_wait = 30*HZ; + init_waitqueue_head(&info->open_wait); + init_waitqueue_head(&info->close_wait); + init_waitqueue_head(&info->status_event_wait_q); + init_waitqueue_head(&info->event_wait_q); + spin_lock_init(&info->irq_spinlock); + spin_lock_init(&info->netlock); + memcpy(&info->params,&default_params,sizeof(MGSL_PARAMS)); + info->idle_mode = HDLC_TXIDLE_FLAGS; + info->num_tx_dma_buffers = 1; + info->num_tx_holding_buffers = 0; + } + + return info; + +} /* end of mgsl_allocate_device()*/ + +/* + * perform tty device initialization + */ +int mgsl_init_tty(void); +int mgsl_init_tty() +{ + struct mgsl_struct *info; + + memset(serial_table,0,sizeof(struct 
tty_struct*)*MAX_TOTAL_DEVICES); + memset(serial_termios,0,sizeof(struct termios*)*MAX_TOTAL_DEVICES); + memset(serial_termios_locked,0,sizeof(struct termios*)*MAX_TOTAL_DEVICES); + + /* Initialize the tty_driver structure */ + + memset(&serial_driver, 0, sizeof(struct tty_driver)); + serial_driver.magic = TTY_DRIVER_MAGIC; + serial_driver.driver_name = "synclink"; + serial_driver.name = "ttySL"; + serial_driver.major = ttymajor; + serial_driver.minor_start = 64; + serial_driver.num = mgsl_device_count; + serial_driver.type = TTY_DRIVER_TYPE_SERIAL; + serial_driver.subtype = SERIAL_TYPE_NORMAL; + serial_driver.init_termios = tty_std_termios; + serial_driver.init_termios.c_cflag = + B9600 | CS8 | CREAD | HUPCL | CLOCAL; + serial_driver.flags = TTY_DRIVER_REAL_RAW; + serial_driver.refcount = &serial_refcount; + serial_driver.table = serial_table; + serial_driver.termios = serial_termios; + serial_driver.termios_locked = serial_termios_locked; + + serial_driver.open = mgsl_open; + serial_driver.close = mgsl_close; + serial_driver.write = mgsl_write; + serial_driver.put_char = mgsl_put_char; + serial_driver.flush_chars = mgsl_flush_chars; + serial_driver.write_room = mgsl_write_room; + serial_driver.chars_in_buffer = mgsl_chars_in_buffer; + serial_driver.flush_buffer = mgsl_flush_buffer; + serial_driver.ioctl = mgsl_ioctl; + serial_driver.throttle = mgsl_throttle; + serial_driver.unthrottle = mgsl_unthrottle; + serial_driver.send_xchar = mgsl_send_xchar; + serial_driver.break_ctl = mgsl_break; + serial_driver.wait_until_sent = mgsl_wait_until_sent; + serial_driver.read_proc = mgsl_read_proc; + serial_driver.set_termios = mgsl_set_termios; + serial_driver.stop = mgsl_stop; + serial_driver.start = mgsl_start; + serial_driver.hangup = mgsl_hangup; + + /* + * The callout device is just like normal device except for + * major number and the subtype code. 
+ */ + callout_driver = serial_driver; + callout_driver.name = "cuaSL"; + callout_driver.major = cuamajor; + callout_driver.subtype = SERIAL_TYPE_CALLOUT; + callout_driver.read_proc = 0; + callout_driver.proc_entry = 0; + + if (tty_register_driver(&serial_driver) < 0) + printk("%s(%d):Couldn't register serial driver\n", + __FILE__,__LINE__); + + if (tty_register_driver(&callout_driver) < 0) + printk("%s(%d):Couldn't register callout driver\n", + __FILE__,__LINE__); + + printk("%s %s, tty major#%d callout major#%d\n", + driver_name, driver_version, + serial_driver.major, callout_driver.major); + + /* Propagate these values to all device instances */ + + info = mgsl_device_list; + while(info){ + info->callout_termios = callout_driver.init_termios; + info->normal_termios = serial_driver.init_termios; + info = info->next_device; + } + + return 0; +} + +/* enumerate user specified ISA adapters + */ +int mgsl_enum_isa_devices() +{ + struct mgsl_struct *info; + int i; + + /* Check for user specified ISA devices */ + + for (i=0 ;(i < MAX_ISA_DEVICES) && io[i] && irq[i]; i++){ + if ( debug_level >= DEBUG_LEVEL_INFO ) + printk("ISA device specified io=%04X,irq=%d,dma=%d\n", + io[i], irq[i], dma[i] ); + + info = mgsl_allocate_device(); + if ( !info ) { + /* error allocating device instance data */ + if ( debug_level >= DEBUG_LEVEL_ERROR ) + printk( "can't allocate device instance data.\n"); + continue; + } + + /* Copy user configuration info to device instance data */ + info->io_base = (unsigned int)io[i]; + info->irq_level = (unsigned int)irq[i]; + info->irq_level = irq_cannonicalize(info->irq_level); + info->dma_level = (unsigned int)dma[i]; + info->bus_type = MGSL_BUS_TYPE_ISA; + info->io_addr_size = 16; + info->irq_flags = 0; + + mgsl_add_device( info ); + } + + return 0; +} + +/* mgsl_init() + * + * Driver initialization entry point. + * + * Arguments: None + * Return Value: 0 if success, otherwise error code + */ +int __init mgsl_init(void) +{ + int rc; + + EXPORT_NO_SYMBOLS; + + printk("%s %s\n", driver_name, driver_version); + + mgsl_enum_isa_devices(); + pci_register_driver(&synclink_pci_driver); + + if ( !mgsl_device_list ) { + printk("%s(%d):No SyncLink devices found.\n",__FILE__,__LINE__); + return -ENODEV; + } + if ((rc = mgsl_init_tty())) + return rc; + + return 0; +} + +static int __init synclink_init(void) +{ +/* Uncomment this to debug the kernel module. + * mgsl_get_text_ptr() leaves the .text address in eax + * which can be used with add-symbol-file with gdb. 
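+ * Illustrative gdb session (module object name and load address are + * examples only, not values produced by this driver): + * (gdb) add-symbol-file synclink.o 0xd0800000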
+ */ + if (break_on_load) { + mgsl_get_text_ptr(); + BREAKPOINT(); + } + + return mgsl_init(); +} + +static void __exit synclink_exit(void) +{ + unsigned long flags; + int rc; + struct mgsl_struct *info; + struct mgsl_struct *tmp; + + printk("Unloading %s: %s\n", driver_name, driver_version); + save_flags(flags); + cli(); + if ((rc = tty_unregister_driver(&serial_driver))) + printk("%s(%d) failed to unregister tty driver err=%d\n", + __FILE__,__LINE__,rc); + if ((rc = tty_unregister_driver(&callout_driver))) + printk("%s(%d) failed to unregister callout driver err=%d\n", + __FILE__,__LINE__,rc); + restore_flags(flags); + + info = mgsl_device_list; + while(info) { +#ifdef CONFIG_SYNCLINK_SYNCPPP + if (info->dosyncppp) + mgsl_sppp_delete(info); +#endif + mgsl_release_resources(info); + tmp = info; + info = info->next_device; + kfree(tmp); + } + + if (tmp_buf) { + free_page((unsigned long) tmp_buf); + tmp_buf = NULL; + } + + pci_unregister_driver(&synclink_pci_driver); +} + +module_init(synclink_init); +module_exit(synclink_exit); + +/* + * usc_RTCmd() + * + * Issue a USC Receive/Transmit command to the + * Channel Command/Address Register (CCAR). + * + * Notes: + * + * The command is encoded in the most significant 5 bits <15..11> + * of the CCAR value. Bits <10..7> of the CCAR must be preserved + * and Bits <6..0> must be written as zeros. + * + * Arguments: + * + * info pointer to device information structure + * Cmd command mask (use symbolic macros) + * + * Return Value: + * + * None + */ +void usc_RTCmd( struct mgsl_struct *info, u16 Cmd ) +{ + /* output command to CCAR in bits <15..11> */ + /* preserve bits <10..7>, bits <6..0> must be zero */ + + outw( Cmd + info->loopback_bits, info->io_base + CCAR ); + + /* Read to flush write to CCAR */ + if ( info->bus_type == MGSL_BUS_TYPE_PCI ) + inw( info->io_base + CCAR ); + +} /* end of usc_RTCmd() */ + +/* + * usc_DmaCmd() + * + * Issue a DMA command to the DMA Command/Address Register (DCAR). 
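+ * Note on the "read to flush write" idiom used here and in the other + * register accessors: PCI bridges may post (buffer) writes, and a + * read from the same device forces any posted writes to complete + * before the routine returns.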
+ * + * Arguments: + * + * info pointer to device information structure + * Cmd DMA command mask (usc_DmaCmd_XX Macros) + * + * Return Value: + * + * None + */ +void usc_DmaCmd( struct mgsl_struct *info, u16 Cmd ) +{ + /* write command mask to DCAR */ + outw( Cmd + info->mbre_bit, info->io_base ); + + /* Read to flush write to DCAR */ + if ( info->bus_type == MGSL_BUS_TYPE_PCI ) + inw( info->io_base ); + +} /* end of usc_DmaCmd() */ + +/* + * usc_OutDmaReg() + * + * Write a 16-bit value to a USC DMA register + * + * Arguments: + * + * info pointer to device info structure + * RegAddr register address (number) for write + * RegValue 16-bit value to write to register + * + * Return Value: + * + * None + * + */ +void usc_OutDmaReg( struct mgsl_struct *info, u16 RegAddr, u16 RegValue ) +{ + /* Note: The DCAR is located at the adapter base address */ + /* Note: must preserve state of BIT8 in DCAR */ + + outw( RegAddr + info->mbre_bit, info->io_base ); + outw( RegValue, info->io_base ); + + /* Read to flush write to DCAR */ + if ( info->bus_type == MGSL_BUS_TYPE_PCI ) + inw( info->io_base ); + +} /* end of usc_OutDmaReg() */ + +/* + * usc_InDmaReg() + * + * Read a 16-bit value from a DMA register + * + * Arguments: + * + * info pointer to device info structure + * RegAddr register address (number) to read from + * + * Return Value: + * + * The 16-bit value read from register + * + */ +u16 usc_InDmaReg( struct mgsl_struct *info, u16 RegAddr ) +{ + /* Note: The DCAR is located at the adapter base address */ + /* Note: must preserve state of BIT8 in DCAR */ + + outw( RegAddr + info->mbre_bit, info->io_base ); + return inw( info->io_base ); + +} /* end of usc_InDmaReg() */ + +/* + * + * usc_OutReg() + * + * Write a 16-bit value to a USC serial channel register + * + * Arguments: + * + * info pointer to device info structure + * RegAddr register address (number) to write to + * RegValue 16-bit value to write to register + * + * Return Value: + * + * None + * + */ +void usc_OutReg( struct mgsl_struct *info, u16 RegAddr, u16 RegValue ) +{ + outw( RegAddr + info->loopback_bits, info->io_base + CCAR ); + outw( RegValue, info->io_base + CCAR ); + + /* Read to flush write to CCAR */ + if ( info->bus_type == MGSL_BUS_TYPE_PCI ) + inw( info->io_base + CCAR ); + +} /* end of usc_OutReg() */ + +/* + * usc_InReg() + * + * Reads a 16-bit value from a USC serial channel register + * + * Arguments: + * + * info pointer to device extension + * RegAddr register address (number) to read from + * + * Return Value: + * + * 16-bit value read from register + */ +u16 usc_InReg( struct mgsl_struct *info, u16 RegAddr ) +{ + outw( RegAddr + info->loopback_bits, info->io_base + CCAR ); + return inw( info->io_base + CCAR ); + +} /* end of usc_InReg() */ + +/* usc_set_sdlc_mode() + * + * Set up the adapter for SDLC DMA communications. + * + * Arguments: info pointer to device instance data + * Return Value: NONE + */ +void usc_set_sdlc_mode( struct mgsl_struct *info ) +{ + u16 RegValue; + int PreSL1660; + + /* + * determine if the IUSC on the adapter is pre-SL1660. If + * not, take advantage of the UnderWait feature of more + * modern chips. If an underrun occurs and this bit is set, + * the transmitter will idle the programmed idle pattern + * until the driver has time to service the underrun. Otherwise, + * the dma controller may get the cycles previously requested + * and begin transmitting queued tx data. 
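+ * The probe below selects test mode register address 0x1f via TMCR, + * then reads the test mode data register (TMDR); pre-SL1660 parts + * return the IUSC_PRE_SL1660 signature, and for those the UnderWait + * bit (TCSR <11>) is left clear later in this function.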
+ */ + usc_OutReg(info,TMCR,0x1f); + RegValue=usc_InReg(info,TMDR); + if ( RegValue == IUSC_PRE_SL1660 ) + PreSL1660 = 1; + else + PreSL1660 = 0; + + + if ( info->params.flags & HDLC_FLAG_HDLC_LOOPMODE ) + { + /* + ** Channel Mode Register (CMR) + ** + ** <15..14> 10 Tx Sub Modes, Send Flag on Underrun + ** <13> 0 0 = Transmit Disabled (initially) + ** <12> 0 1 = Consecutive Idles share common 0 + ** <11..8> 1110 Transmitter Mode = HDLC/SDLC Loop + ** <7..4> 0000 Rx Sub Modes, addr/ctrl field handling + ** <3..0> 0110 Receiver Mode = HDLC/SDLC + ** + ** 1000 1110 0000 0110 = 0x8e06 + */ + RegValue = 0x8e06; + + /*-------------------------------------------------- + * ignore user options for UnderRun Actions and + * preambles + *--------------------------------------------------*/ + } + else + { + /* Channel mode Register (CMR) + * + * <15..14> 00 Tx Sub modes, Underrun Action + * <13> 0 1 = Send Preamble before opening flag + * <12> 0 1 = Consecutive Idles share common 0 + * <11..8> 0110 Transmitter mode = HDLC/SDLC + * <7..4> 0000 Rx Sub modes, addr/ctrl field handling + * <3..0> 0110 Receiver mode = HDLC/SDLC + * + * 0000 0110 0000 0110 = 0x0606 + */ + if (info->params.mode == MGSL_MODE_RAW) { + RegValue = 0x0001; /* Set Receive mode = external sync */ + + usc_OutReg( info, IOCR, /* Set IOCR DCD is RxSync Detect Input */ + (unsigned short)((usc_InReg(info, IOCR) & ~(BIT13|BIT12)) | BIT12)); + + /* + * TxSubMode: + * CMR <15> 0 Don't send CRC on Tx Underrun + * CMR <14> x undefined + * CMR <13> 0 Send preamble before opening sync + * CMR <12> 0 Send 8-bit syncs, 1=send Syncs per TxLength + * + * TxMode: + * CMR <11..8> 0100 MonoSync + * + * 0x00 0100 xxxx xxxx = 0x04xx + */ + RegValue |= 0x0400; + } + else { + + RegValue = 0x0606; + + if ( info->params.flags & HDLC_FLAG_UNDERRUN_ABORT15 ) + RegValue |= BIT14; + else if ( info->params.flags & HDLC_FLAG_UNDERRUN_FLAG ) + RegValue |= BIT15; + else if ( info->params.flags & HDLC_FLAG_UNDERRUN_CRC ) + RegValue |= BIT15 + BIT14; + } + + if ( info->params.preamble != HDLC_PREAMBLE_PATTERN_NONE ) + RegValue |= BIT13; + } + + if ( info->params.mode == MGSL_MODE_HDLC && + (info->params.flags & HDLC_FLAG_SHARE_ZERO) ) + RegValue |= BIT12; + + if ( info->params.addr_filter != 0xff ) + { + /* set up receive address filtering */ + usc_OutReg( info, RSR, info->params.addr_filter ); + RegValue |= BIT4; + } + + usc_OutReg( info, CMR, RegValue ); + info->cmr_value = RegValue; + + /* Receiver mode Register (RMR) + * + * <15..13> 000 encoding + * <12..11> 00 FCS = 16bit CRC CCITT (x16 + x12 + x5 + 1) + * <10> 1 1 = Set CRC to all 1s (use for SDLC/HDLC) + * <9> 0 1 = Include Receive chars in CRC + * <8> 1 1 = Use Abort/PE bit as abort indicator + * <7..6> 00 Even parity + * <5> 0 parity disabled + * <4..2> 000 Receive Char Length = 8 bits + * <1..0> 00 Disable Receiver + * + * 0000 0101 0000 0000 = 0x0500 + */ + + RegValue = 0x0500; + + switch ( info->params.encoding ) { + case HDLC_ENCODING_NRZB: RegValue |= BIT13; break; + case HDLC_ENCODING_NRZI_MARK: RegValue |= BIT14; break; + case HDLC_ENCODING_NRZI_SPACE: RegValue |= BIT14 + BIT13; break; + case HDLC_ENCODING_BIPHASE_MARK: RegValue |= BIT15; break; + case HDLC_ENCODING_BIPHASE_SPACE: RegValue |= BIT15 + BIT13; break; + case HDLC_ENCODING_BIPHASE_LEVEL: RegValue |= BIT15 + BIT14; break; + case HDLC_ENCODING_DIFF_BIPHASE_LEVEL: RegValue |= BIT15 + BIT14 + BIT13; break; + } + + if ( (info->params.crc_type & HDLC_CRC_MASK) == HDLC_CRC_16_CCITT ) + RegValue |= BIT9; + else if ( (info->params.crc_type & 
HDLC_CRC_MASK) == HDLC_CRC_32_CCITT ) + RegValue |= ( BIT12 | BIT10 | BIT9 ); + + usc_OutReg( info, RMR, RegValue ); + + /* Set the Receive count Limit Register (RCLR) to 0xffff. */ + /* When an opening flag of an SDLC frame is recognized the */ + /* Receive Character count (RCC) is loaded with the value in */ + /* RCLR. The RCC is decremented for each received byte. The */ + /* value of RCC is stored after the closing flag of the frame */ + /* allowing the frame size to be computed. */ + + usc_OutReg( info, RCLR, RCLRVALUE ); + + usc_RCmd( info, RCmd_SelectRicrdma_level ); + + /* Receive Interrupt Control Register (RICR) + * + * <15..8> ? RxFIFO DMA Request Level + * <7> 0 Exited Hunt IA (Interrupt Arm) + * <6> 0 Idle Received IA + * <5> 0 Break/Abort IA + * <4> 0 Rx Bound IA + * <3> 1 Queued status reflects oldest 2 bytes in FIFO + * <2> 0 Abort/PE IA + * <1> 1 Rx Overrun IA + * <0> 0 Select TC0 value for readback + * + * 0000 0000 0000 1010 = 0x000a + */ + + /* Carry over the Exit Hunt and Idle Received bits */ + /* in case they have been armed by usc_ArmEvents. */ + + RegValue = usc_InReg( info, RICR ) & 0xc0; + + if ( info->bus_type == MGSL_BUS_TYPE_PCI ) + usc_OutReg( info, RICR, (u16)(0x030a | RegValue) ); + else + usc_OutReg( info, RICR, (u16)(0x140a | RegValue) ); + + /* Unlatch all Rx status bits and clear Rx status IRQ Pending */ + + usc_UnlatchRxstatusBits( info, RXSTATUS_ALL ); + usc_ClearIrqPendingBits( info, RECEIVE_STATUS ); + + /* Transmit mode Register (TMR) + * + * <15..13> 000 encoding + * <12..11> 00 FCS = 16bit CRC CCITT (x16 + x12 + x5 + 1) + * <10> 1 1 = Start CRC as all 1s (use for SDLC/HDLC) + * <9> 0 1 = Tx CRC Enabled + * <8> 0 1 = Append CRC to end of transmit frame + * <7..6> 00 Transmit parity Even + * <5> 0 Transmit parity Disabled + * <4..2> 000 Tx Char Length = 8 bits + * <1..0> 00 Disable Transmitter + * + * 0000 0100 0000 0000 = 0x0400 + */ + + RegValue = 0x0400; + + switch ( info->params.encoding ) { + case HDLC_ENCODING_NRZB: RegValue |= BIT13; break; + case HDLC_ENCODING_NRZI_MARK: RegValue |= BIT14; break; + case HDLC_ENCODING_NRZI_SPACE: RegValue |= BIT14 + BIT13; break; + case HDLC_ENCODING_BIPHASE_MARK: RegValue |= BIT15; break; + case HDLC_ENCODING_BIPHASE_SPACE: RegValue |= BIT15 + BIT13; break; + case HDLC_ENCODING_BIPHASE_LEVEL: RegValue |= BIT15 + BIT14; break; + case HDLC_ENCODING_DIFF_BIPHASE_LEVEL: RegValue |= BIT15 + BIT14 + BIT13; break; + } + + if ( (info->params.crc_type & HDLC_CRC_MASK) == HDLC_CRC_16_CCITT ) + RegValue |= BIT9 + BIT8; + else if ( (info->params.crc_type & HDLC_CRC_MASK) == HDLC_CRC_32_CCITT ) + RegValue |= ( BIT12 | BIT10 | BIT9 | BIT8); + + usc_OutReg( info, TMR, RegValue ); + + usc_set_txidle( info ); + + + usc_TCmd( info, TCmd_SelectTicrdma_level ); + + /* Transmit Interrupt Control Register (TICR) + * + * <15..8> ?
Transmit FIFO DMA Level + * <7> 0 Present IA (Interrupt Arm) + * <6> 0 Idle Sent IA + * <5> 1 Abort Sent IA + * <4> 1 EOF/EOM Sent IA + * <3> 0 CRC Sent IA + * <2> 1 1 = Wait for SW Trigger to Start Frame + * <1> 1 Tx Underrun IA + * <0> 0 TC0 constant on read back + * + * 0000 0000 0011 0110 = 0x0036 + */ + + if ( info->bus_type == MGSL_BUS_TYPE_PCI ) + usc_OutReg( info, TICR, 0x0736 ); + else + usc_OutReg( info, TICR, 0x1436 ); + + usc_UnlatchTxstatusBits( info, TXSTATUS_ALL ); + usc_ClearIrqPendingBits( info, TRANSMIT_STATUS ); + + /* + ** Transmit Command/Status Register (TCSR) + ** + ** <15..12> 0000 TCmd + ** <11> 0/1 UnderWait + ** <10..08> 000 TxIdle + ** <7> x PreSent + ** <6> x IdleSent + ** <5> x AbortSent + ** <4> x EOF/EOM Sent + ** <3> x CRC Sent + ** <2> x All Sent + ** <1> x TxUnder + ** <0> x TxEmpty + ** + ** 0000 0000 0000 0000 = 0x0000 + */ + info->tcsr_value = 0; + + if ( !PreSL1660 ) + info->tcsr_value |= TCSR_UNDERWAIT; + + usc_OutReg( info, TCSR, info->tcsr_value ); + + /* Clock mode Control Register (CMCR) + * + * <15..14> 00 counter 1 Source = Disabled + * <13..12> 00 counter 0 Source = Disabled + * <11..10> 11 BRG1 Input is TxC Pin + * <9..8> 11 BRG0 Input is TxC Pin + * <7..6> 01 DPLL Input is BRG1 Output + * <5..3> XXX TxCLK comes from Port 0 + * <2..0> XXX RxCLK comes from Port 1 + * + * 0000 1111 0111 0111 = 0x0f77 + */ + + RegValue = 0x0f40; + + if ( info->params.flags & HDLC_FLAG_RXC_DPLL ) + RegValue |= 0x0003; /* RxCLK from DPLL */ + else if ( info->params.flags & HDLC_FLAG_RXC_BRG ) + RegValue |= 0x0004; /* RxCLK from BRG0 */ + else if ( info->params.flags & HDLC_FLAG_RXC_TXCPIN) + RegValue |= 0x0006; /* RxCLK from TXC Input */ + else + RegValue |= 0x0007; /* RxCLK from Port1 */ + + if ( info->params.flags & HDLC_FLAG_TXC_DPLL ) + RegValue |= 0x0018; /* TxCLK from DPLL */ + else if ( info->params.flags & HDLC_FLAG_TXC_BRG ) + RegValue |= 0x0020; /* TxCLK from BRG0 */ + else if ( info->params.flags & HDLC_FLAG_TXC_RXCPIN) + RegValue |= 0x0038; /* TxCLK from RxC Input */ + else + RegValue |= 0x0030; /* TxCLK from Port0 */ + + usc_OutReg( info, CMCR, RegValue ); + + + /* Hardware Configuration Register (HCR) + * + * <15..14> 00 CTR0 Divisor:00=32,01=16,10=8,11=4 + * <13> 0 CTR1DSel:0=CTR0Div determines CTR1Div + * <12> 0 CVOK:0=report code violation in biphase + * <11..10> 00 DPLL Divisor:00=32,01=16,10=8,11=4 + * <9..8> XX DPLL mode:00=disable,01=NRZ,10=Biphase,11=Biphase Level + * <7..6> 00 reserved + * <5> 0 BRG1 mode:0=continuous,1=single cycle + * <4> X BRG1 Enable + * <3..2> 00 reserved + * <1> 0 BRG0 mode:0=continuous,1=single cycle + * <0> 0 BRG0 Enable + */ + + RegValue = 0x0000; + + if ( info->params.flags & (HDLC_FLAG_RXC_DPLL + HDLC_FLAG_TXC_DPLL) ) { + u32 XtalSpeed; + u32 DpllDivisor; + u16 Tc; + + /* DPLL is enabled. Use BRG1 to provide continuous reference clock */ + /* for DPLL. DPLL mode in HCR is dependent on the encoding used. */ + + if ( info->bus_type == MGSL_BUS_TYPE_PCI ) + XtalSpeed = 11059200; + else + XtalSpeed = 14745600; + + if ( info->params.flags & HDLC_FLAG_DPLL_DIV16 ) { + DpllDivisor = 16; + RegValue |= BIT10; + } + else if ( info->params.flags & HDLC_FLAG_DPLL_DIV8 ) { + DpllDivisor = 8; + RegValue |= BIT11; + } + else + DpllDivisor = 32; + + /* Tc = (Xtal/Speed) - 1 */ + /* If twice the remainder of (Xtal/Speed) is greater than Speed */ + /* then rounding up gives a more precise time constant. Instead */ + /* of rounding up and then subtracting 1 we just don't subtract */ + /* the one in this case. 
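As a worked example with hypothetical values: the 14745600Hz ISA crystal with the default divisor of 32 gives a 460800Hz reference, so a clock_speed of 9600 divides exactly (460800/9600 = 48, remainder 0) and Tc is decremented to 47; usc_enable_aux_clock() applies the same rule.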
*/ + + /*-------------------------------------------------- + * ejz: for DPLL mode, application should use the + * same clock speed as the partner system, even + * though clocking is derived from the input RxData. + * In case the user uses a 0 for the clock speed, + * default to 0xffffffff and don't try to divide by + * zero + *--------------------------------------------------*/ + if ( info->params.clock_speed ) + { + Tc = (u16)((XtalSpeed/DpllDivisor)/info->params.clock_speed); + if ( !((((XtalSpeed/DpllDivisor) % info->params.clock_speed) * 2) + / info->params.clock_speed) ) + Tc--; + } + else + Tc = -1; + + + /* Write 16-bit Time Constant for BRG1 */ + usc_OutReg( info, TC1R, Tc ); + + RegValue |= BIT4; /* enable BRG1 */ + + switch ( info->params.encoding ) { + case HDLC_ENCODING_NRZ: + case HDLC_ENCODING_NRZB: + case HDLC_ENCODING_NRZI_MARK: + case HDLC_ENCODING_NRZI_SPACE: RegValue |= BIT8; break; + case HDLC_ENCODING_BIPHASE_MARK: + case HDLC_ENCODING_BIPHASE_SPACE: RegValue |= BIT9; break; + case HDLC_ENCODING_BIPHASE_LEVEL: + case HDLC_ENCODING_DIFF_BIPHASE_LEVEL: RegValue |= BIT9 + BIT8; break; + } + } + + usc_OutReg( info, HCR, RegValue ); + + + /* Channel Control/status Register (CCSR) + * + * <15> X RCC FIFO Overflow status (RO) + * <14> X RCC FIFO Not Empty status (RO) + * <13> 0 1 = Clear RCC FIFO (WO) + * <12> X DPLL Sync (RW) + * <11> X DPLL 2 Missed Clocks status (RO) + * <10> X DPLL 1 Missed Clock status (RO) + * <9..8> 00 DPLL Resync on rising and falling edges (RW) + * <7> X SDLC Loop On status (RO) + * <6> X SDLC Loop Send status (RO) + * <5> 1 Bypass counters for TxClk and RxClk (RW) + * <4..2> 000 Last Char of SDLC frame has 8 bits (RW) + * <1..0> 00 reserved + * + * 0000 0000 0010 0000 = 0x0020 + */ + + usc_OutReg( info, CCSR, 0x1020 ); + + + if ( info->params.flags & HDLC_FLAG_AUTO_CTS ) { + usc_OutReg( info, SICR, + (u16)(usc_InReg(info,SICR) | SICR_CTS_INACTIVE) ); + } + + + /* enable Master Interrupt Enable bit (MIE) */ + usc_EnableMasterIrqBit( info ); + + usc_ClearIrqPendingBits( info, RECEIVE_STATUS + RECEIVE_DATA + + TRANSMIT_STATUS + TRANSMIT_DATA ); + + info->mbre_bit = 0; + outw( 0, info->io_base ); /* clear Master Bus Enable (DCAR) */ + usc_DmaCmd( info, DmaCmd_ResetAllChannels ); /* disable both DMA channels */ + info->mbre_bit = BIT8; + outw( BIT8, info->io_base ); /* set Master Bus Enable (DCAR) */ + + /* Enable DMAEN (Port 7, Bit 14) */ + /* This connects the DMA request signal to the ISA bus */ + /* on the ISA adapter. This has no effect for the PCI adapter */ + usc_OutReg( info, PCR, (u16)((usc_InReg(info, PCR) | BIT15) & ~BIT14) ); + + /* DMA Control Register (DCR) + * + * <15..14> 10 Priority mode = Alternating Tx/Rx + * 01 Rx has priority + * 00 Tx has priority + * + * <13> 1 Enable Priority Preempt per DCR<15..14> + * (WARNING DCR<11..10> must be 00 when this is 1) + * 0 Choose activate channel per DCR<11..10> + * + * <12> 0 Little Endian for Array/List + * <11..10> 00 Both Channels can use each bus grant + * <9..6> 0000 reserved + * <5> 0 7 CLK - Minimum Bus Re-request Interval + * <4> 0 1 = drive D/C and S/D pins + * <3> 1 1 = Add one wait state to all DMA cycles. + * <2> 0 1 = Strobe /UAS on every transfer. 
+ * <1..0> 11 Addr incrementing only affects LS24 bits + * + * 0110 0000 0000 1011 = 0x600b + */ + + if ( info->bus_type == MGSL_BUS_TYPE_PCI ) { + /* PCI adapter does not need DMA wait state */ + usc_OutDmaReg( info, DCR, 0xa00b ); + } + else + usc_OutDmaReg( info, DCR, 0x800b ); + + + /* Receive DMA mode Register (RDMR) + * + * <15..14> 11 DMA mode = Linked List Buffer mode + * <13> 1 RSBinA/L = store Rx status Block in Arrary/List entry + * <12> 1 Clear count of List Entry after fetching + * <11..10> 00 Address mode = Increment + * <9> 1 Terminate Buffer on RxBound + * <8> 0 Bus Width = 16bits + * <7..0> ? status Bits (write as 0s) + * + * 1111 0010 0000 0000 = 0xf200 + */ + + usc_OutDmaReg( info, RDMR, 0xf200 ); + + + /* Transmit DMA mode Register (TDMR) + * + * <15..14> 11 DMA mode = Linked List Buffer mode + * <13> 1 TCBinA/L = fetch Tx Control Block from List entry + * <12> 1 Clear count of List Entry after fetching + * <11..10> 00 Address mode = Increment + * <9> 1 Terminate Buffer on end of frame + * <8> 0 Bus Width = 16bits + * <7..0> ? status Bits (Read Only so write as 0) + * + * 1111 0010 0000 0000 = 0xf200 + */ + + usc_OutDmaReg( info, TDMR, 0xf200 ); + + + /* DMA Interrupt Control Register (DICR) + * + * <15> 1 DMA Interrupt Enable + * <14> 0 1 = Disable IEO from USC + * <13> 0 1 = Don't provide vector during IntAck + * <12> 1 1 = Include status in Vector + * <10..2> 0 reserved, Must be 0s + * <1> 0 1 = Rx DMA Interrupt Enabled + * <0> 0 1 = Tx DMA Interrupt Enabled + * + * 1001 0000 0000 0000 = 0x9000 + */ + + usc_OutDmaReg( info, DICR, 0x9000 ); + + usc_InDmaReg( info, RDMR ); /* clear pending receive DMA IRQ bits */ + usc_InDmaReg( info, TDMR ); /* clear pending transmit DMA IRQ bits */ + usc_OutDmaReg( info, CDIR, 0x0303 ); /* clear IUS and Pending for Tx and Rx */ + + /* Channel Control Register (CCR) + * + * <15..14> 10 Use 32-bit Tx Control Blocks (TCBs) + * <13> 0 Trigger Tx on SW Command Disabled + * <12> 0 Flag Preamble Disabled + * <11..10> 00 Preamble Length + * <9..8> 00 Preamble Pattern + * <7..6> 10 Use 32-bit Rx status Blocks (RSBs) + * <5> 0 Trigger Rx on SW Command Disabled + * <4..0> 0 reserved + * + * 1000 0000 1000 0000 = 0x8080 + */ + + RegValue = 0x8080; + + switch ( info->params.preamble_length ) { + case HDLC_PREAMBLE_LENGTH_16BITS: RegValue |= BIT10; break; + case HDLC_PREAMBLE_LENGTH_32BITS: RegValue |= BIT11; break; + case HDLC_PREAMBLE_LENGTH_64BITS: RegValue |= BIT11 + BIT10; break; + } + + switch ( info->params.preamble ) { + case HDLC_PREAMBLE_PATTERN_FLAGS: RegValue |= BIT8 + BIT12; break; + case HDLC_PREAMBLE_PATTERN_ONES: RegValue |= BIT8; break; + case HDLC_PREAMBLE_PATTERN_10: RegValue |= BIT9; break; + case HDLC_PREAMBLE_PATTERN_01: RegValue |= BIT9 + BIT8; break; + } + + usc_OutReg( info, CCR, RegValue ); + + + /* + * Burst/Dwell Control Register + * + * <15..8> 0x20 Maximum number of transfers per bus grant + * <7..0> 0x00 Maximum number of clock cycles per bus grant + */ + + if ( info->bus_type == MGSL_BUS_TYPE_PCI ) { + /* don't limit bus occupancy on PCI adapter */ + usc_OutDmaReg( info, BDCR, 0x0000 ); + } + else + usc_OutDmaReg( info, BDCR, 0x2000 ); + + usc_stop_transmitter(info); + usc_stop_receiver(info); + +} /* end of usc_set_sdlc_mode() */ + +/* usc_enable_loopback() + * + * Set the 16C32 for internal loopback mode. + * The TxCLK and RxCLK signals are generated from the BRG0 and + * the TxD is looped back to the RxD internally. 
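+ * + * (BRG0 is programmed below with a 16-bit time constant of + * (input clock / bit rate) - 1; for example, at a hypothetical + * 38400bps the 11.0592MHz PCI clock gives 11059200/38400 - 1 = 287.)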
+ * + * Arguments: info pointer to device instance data + * enable 1 = enable loopback, 0 = disable + * Return Value: None + */ +void usc_enable_loopback(struct mgsl_struct *info, int enable) +{ + if (enable) { + /* blank external TXD output */ + usc_OutReg(info,IOCR,usc_InReg(info,IOCR) | (BIT7+BIT6)); + + /* Clock mode Control Register (CMCR) + * + * <15..14> 00 counter 1 Disabled + * <13..12> 00 counter 0 Disabled + * <11..10> 11 BRG1 Input is TxC Pin + * <9..8> 11 BRG0 Input is TxC Pin + * <7..6> 01 DPLL Input is BRG1 Output + * <5..3> 100 TxCLK comes from BRG0 + * <2..0> 100 RxCLK comes from BRG0 + * + * 0000 1111 0110 0100 = 0x0f64 + */ + + usc_OutReg( info, CMCR, 0x0f64 ); + + /* Write 16-bit Time Constant for BRG0 */ + /* use clock speed if available, otherwise use 8 for diagnostics */ + if (info->params.clock_speed) { + if (info->bus_type == MGSL_BUS_TYPE_PCI) + usc_OutReg(info, TC0R, (u16)((11059200/info->params.clock_speed)-1)); + else + usc_OutReg(info, TC0R, (u16)((14745600/info->params.clock_speed)-1)); + } else + usc_OutReg(info, TC0R, (u16)8); + + /* Hardware Configuration Register (HCR) Clear Bit 1, BRG0 + mode = Continuous Set Bit 0 to enable BRG0. */ + usc_OutReg( info, HCR, (u16)((usc_InReg( info, HCR ) & ~BIT1) | BIT0) ); + + /* Input/Output Control Reg, <2..0> = 100, Drive RxC pin with BRG0 */ + usc_OutReg(info, IOCR, (u16)((usc_InReg(info, IOCR) & 0xfff8) | 0x0004)); + + /* set Internal Data loopback mode */ + info->loopback_bits = 0x300; + outw( 0x0300, info->io_base + CCAR ); + } else { + /* enable external TXD output */ + usc_OutReg(info,IOCR,usc_InReg(info,IOCR) & ~(BIT7+BIT6)); + + /* clear Internal Data loopback mode */ + info->loopback_bits = 0; + outw( 0,info->io_base + CCAR ); + } + +} /* end of usc_enable_loopback() */ + +/* usc_enable_aux_clock() + * + * Enabled the AUX clock output at the specified frequency. + * + * Arguments: + * + * info pointer to device extension + * data_rate data rate of clock in bits per second + * A data rate of 0 disables the AUX clock. + * + * Return Value: None + */ +void usc_enable_aux_clock( struct mgsl_struct *info, u32 data_rate ) +{ + u32 XtalSpeed; + u16 Tc; + + if ( data_rate ) { + if ( info->bus_type == MGSL_BUS_TYPE_PCI ) + XtalSpeed = 11059200; + else + XtalSpeed = 14745600; + + + /* Tc = (Xtal/Speed) - 1 */ + /* If twice the remainder of (Xtal/Speed) is greater than Speed */ + /* then rounding up gives a more precise time constant. Instead */ + /* of rounding up and then subtracting 1 we just don't subtract */ + /* the one in this case. */ + + + Tc = (u16)(XtalSpeed/data_rate); + if ( !(((XtalSpeed % data_rate) * 2) / data_rate) ) + Tc--; + + /* Write 16-bit Time Constant for BRG0 */ + usc_OutReg( info, TC0R, Tc ); + + /* + * Hardware Configuration Register (HCR) + * Clear Bit 1, BRG0 mode = Continuous + * Set Bit 0 to enable BRG0. + */ + + usc_OutReg( info, HCR, (u16)((usc_InReg( info, HCR ) & ~BIT1) | BIT0) ); + + /* Input/Output Control Reg, <2..0> = 100, Drive RxC pin with BRG0 */ + usc_OutReg( info, IOCR, (u16)((usc_InReg(info, IOCR) & 0xfff8) | 0x0004) ); + } else { + /* data rate == 0 so turn off BRG0 */ + usc_OutReg( info, HCR, (u16)(usc_InReg( info, HCR ) & ~BIT0) ); + } + +} /* end of usc_enable_aux_clock() */ + +/* + * + * usc_process_rxoverrun_sync() + * + * This function processes a receive overrun by resetting the + * receive DMA buffers and issuing a Purge Rx FIFO command + * to allow the receiver to continue receiving. 
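+ * + * (The scan below relies on two per-entry fields: the 16C32 zeroes + * an entry's 'count' when it takes the buffer into use and sets a + * non-zero 'status' in the entry that terminates a frame, so a run + * of count == 0 entries with no terminating status marks the frame + * that was cut short by the overrun.)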
+ * + * Arguments: + * + * info pointer to device extension + * + * Return Value: None + */ +void usc_process_rxoverrun_sync( struct mgsl_struct *info ) +{ + int start_index; + int end_index; + int frame_start_index; + int start_of_frame_found = FALSE; + int end_of_frame_found = FALSE; + int reprogram_dma = FALSE; + + DMABUFFERENTRY *buffer_list = info->rx_buffer_list; + u32 phys_addr; + + usc_DmaCmd( info, DmaCmd_PauseRxChannel ); + usc_RCmd( info, RCmd_EnterHuntmode ); + usc_RTCmd( info, RTCmd_PurgeRxFifo ); + + /* CurrentRxBuffer points to the 1st buffer of the next */ + /* possibly available receive frame. */ + + frame_start_index = start_index = end_index = info->current_rx_buffer; + + /* Search for an unfinished string of buffers. This means */ + /* that a receive frame started (at least one buffer with */ + /* count set to zero) but there is no terminating buffer */ + /* (status set to non-zero). */ + + while( !buffer_list[end_index].count ) + { + /* Count field has been reset to zero by 16C32. */ + /* This buffer is currently in use. */ + + if ( !start_of_frame_found ) + { + start_of_frame_found = TRUE; + frame_start_index = end_index; + end_of_frame_found = FALSE; + } + + if ( buffer_list[end_index].status ) + { + /* Status field has been set by 16C32. */ + /* This is the last buffer of a received frame. */ + + /* We want to leave the buffers for this frame intact. */ + /* Move on to next possible frame. */ + + start_of_frame_found = FALSE; + end_of_frame_found = TRUE; + } + + /* advance to next buffer entry in linked list */ + end_index++; + if ( end_index == info->rx_buffer_count ) + end_index = 0; + + if ( start_index == end_index ) + { + /* The entire list has been searched with all Counts == 0 and */ + /* all Status == 0. The receive buffers are */ + /* completely screwed, reset all receive buffers! */ + mgsl_reset_rx_dma_buffers( info ); + frame_start_index = 0; + start_of_frame_found = FALSE; + reprogram_dma = TRUE; + break; + } + } + + if ( start_of_frame_found && !end_of_frame_found ) + { + /* There is an unfinished string of receive DMA buffers */ + /* as a result of the receiver overrun. */ + + /* Reset the buffers for the unfinished frame */ + /* and reprogram the receive DMA controller to start */ + /* at the 1st buffer of unfinished frame. */ + + start_index = frame_start_index; + + do + { + *((unsigned long *)&(info->rx_buffer_list[start_index++].count)) = DMABUFFERSIZE; + + /* Adjust index for wrap around. */ + if ( start_index == info->rx_buffer_count ) + start_index = 0; + + } while( start_index != end_index ); + + reprogram_dma = TRUE; + } + + if ( reprogram_dma ) + { + usc_UnlatchRxstatusBits(info,RXSTATUS_ALL); + usc_ClearIrqPendingBits(info, RECEIVE_DATA|RECEIVE_STATUS); + usc_UnlatchRxstatusBits(info, RECEIVE_DATA|RECEIVE_STATUS); + + usc_EnableReceiver(info,DISABLE_UNCONDITIONAL); + + /* This empties the receive FIFO and loads the RCC with RCLR */ + usc_OutReg( info, CCSR, (u16)(usc_InReg(info,CCSR) | BIT13) ); + + /* program 16C32 with physical address of 1st DMA buffer entry */ + phys_addr = info->rx_buffer_list[frame_start_index].phys_entry; + usc_OutDmaReg( info, NRARL, (u16)phys_addr ); + usc_OutDmaReg( info, NRARU, (u16)(phys_addr >> 16) ); + + usc_UnlatchRxstatusBits( info, RXSTATUS_ALL ); + usc_ClearIrqPendingBits( info, RECEIVE_DATA + RECEIVE_STATUS ); + usc_EnableInterrupts( info, RECEIVE_STATUS ); + + /* 1. Arm End of Buffer (EOB) Receive DMA Interrupt (BIT2 of RDIAR) */ + /* 2. 
Enable Receive DMA Interrupts (BIT1 of DICR) */ + + usc_OutDmaReg( info, RDIAR, BIT3 + BIT2 ); + usc_OutDmaReg( info, DICR, (u16)(usc_InDmaReg(info,DICR) | BIT1) ); + usc_DmaCmd( info, DmaCmd_InitRxChannel ); + if ( info->params.flags & HDLC_FLAG_AUTO_DCD ) + usc_EnableReceiver(info,ENABLE_AUTO_DCD); + else + usc_EnableReceiver(info,ENABLE_UNCONDITIONAL); + } + else + { + /* This empties the receive FIFO and loads the RCC with RCLR */ + usc_OutReg( info, CCSR, (u16)(usc_InReg(info,CCSR) | BIT13) ); + usc_RTCmd( info, RTCmd_PurgeRxFifo ); + } + +} /* end of usc_process_rxoverrun_sync() */ + +/* usc_stop_receiver() + * + * Disable USC receiver + * + * Arguments: info pointer to device instance data + * Return Value: None + */ +void usc_stop_receiver( struct mgsl_struct *info ) +{ + if (debug_level >= DEBUG_LEVEL_ISR) + printk("%s(%d):usc_stop_receiver(%s)\n", + __FILE__,__LINE__, info->device_name ); + + /* Disable receive DMA channel. */ + /* This also disables receive DMA channel interrupts */ + usc_DmaCmd( info, DmaCmd_ResetRxChannel ); + + usc_UnlatchRxstatusBits( info, RXSTATUS_ALL ); + usc_ClearIrqPendingBits( info, RECEIVE_DATA + RECEIVE_STATUS ); + usc_DisableInterrupts( info, RECEIVE_DATA + RECEIVE_STATUS ); + + usc_EnableReceiver(info,DISABLE_UNCONDITIONAL); + + /* This empties the receive FIFO and loads the RCC with RCLR */ + usc_OutReg( info, CCSR, (u16)(usc_InReg(info,CCSR) | BIT13) ); + usc_RTCmd( info, RTCmd_PurgeRxFifo ); + + info->rx_enabled = 0; + info->rx_overflow = 0; + +} /* end of stop_receiver() */ + +/* usc_start_receiver() + * + * Enable the USC receiver + * + * Arguments: info pointer to device instance data + * Return Value: None + */ +void usc_start_receiver( struct mgsl_struct *info ) +{ + u32 phys_addr; + + if (debug_level >= DEBUG_LEVEL_ISR) + printk("%s(%d):usc_start_receiver(%s)\n", + __FILE__,__LINE__, info->device_name ); + + mgsl_reset_rx_dma_buffers( info ); + usc_stop_receiver( info ); + + usc_OutReg( info, CCSR, (u16)(usc_InReg(info,CCSR) | BIT13) ); + usc_RTCmd( info, RTCmd_PurgeRxFifo ); + + if ( info->params.mode == MGSL_MODE_HDLC || + info->params.mode == MGSL_MODE_RAW ) { + /* DMA mode Transfers */ + /* Program the DMA controller. */ + /* Enable the DMA controller end of buffer interrupt. */ + + /* program 16C32 with physical address of 1st DMA buffer entry */ + phys_addr = info->rx_buffer_list[0].phys_entry; + usc_OutDmaReg( info, NRARL, (u16)phys_addr ); + usc_OutDmaReg( info, NRARU, (u16)(phys_addr >> 16) ); + + usc_UnlatchRxstatusBits( info, RXSTATUS_ALL ); + usc_ClearIrqPendingBits( info, RECEIVE_DATA + RECEIVE_STATUS ); + usc_EnableInterrupts( info, RECEIVE_STATUS ); + + /* 1. Arm End of Buffer (EOB) Receive DMA Interrupt (BIT2 of RDIAR) */ + /* 2. 
Enable Receive DMA Interrupts (BIT1 of DICR) */ + + usc_OutDmaReg( info, RDIAR, BIT3 + BIT2 ); + usc_OutDmaReg( info, DICR, (u16)(usc_InDmaReg(info,DICR) | BIT1) ); + usc_DmaCmd( info, DmaCmd_InitRxChannel ); + if ( info->params.flags & HDLC_FLAG_AUTO_DCD ) + usc_EnableReceiver(info,ENABLE_AUTO_DCD); + else + usc_EnableReceiver(info,ENABLE_UNCONDITIONAL); + } else { + usc_UnlatchRxstatusBits(info, RXSTATUS_ALL); + usc_ClearIrqPendingBits(info, RECEIVE_DATA + RECEIVE_STATUS); + usc_EnableInterrupts(info, RECEIVE_DATA); + + usc_RTCmd( info, RTCmd_PurgeRxFifo ); + usc_RCmd( info, RCmd_EnterHuntmode ); + + usc_EnableReceiver(info,ENABLE_UNCONDITIONAL); + } + + usc_OutReg( info, CCSR, 0x1020 ); + + info->rx_enabled = 1; + +} /* end of usc_start_receiver() */ + +/* usc_start_transmitter() + * + * Enable the USC transmitter and send a transmit frame if + * one is loaded in the DMA buffers. + * + * Arguments: info pointer to device instance data + * Return Value: None + */ +void usc_start_transmitter( struct mgsl_struct *info ) +{ + u32 phys_addr; + unsigned int FrameSize; + + if (debug_level >= DEBUG_LEVEL_ISR) + printk("%s(%d):usc_start_transmitter(%s)\n", + __FILE__,__LINE__, info->device_name ); + + if ( info->xmit_cnt ) { + + /* If auto RTS enabled and RTS is inactive, then assert */ + /* RTS and set a flag indicating that the driver should */ + /* negate RTS when the transmission completes. */ + + info->drop_rts_on_tx_done = 0; + + if ( info->params.flags & HDLC_FLAG_AUTO_RTS ) { + usc_get_serial_signals( info ); + if ( !(info->serial_signals & SerialSignal_RTS) ) { + info->serial_signals |= SerialSignal_RTS; + usc_set_serial_signals( info ); + info->drop_rts_on_tx_done = 1; + } + } + + + if ( info->params.mode == MGSL_MODE_ASYNC ) { + if ( !info->tx_active ) { + usc_UnlatchTxstatusBits(info, TXSTATUS_ALL); + usc_ClearIrqPendingBits(info, TRANSMIT_STATUS + TRANSMIT_DATA); + usc_EnableInterrupts(info, TRANSMIT_DATA); + usc_load_txfifo(info); + } + } else { + /* Disable transmit DMA controller while programming. */ + usc_DmaCmd( info, DmaCmd_ResetTxChannel ); + + /* Transmit DMA buffer is loaded, so program USC */ + /* to send the frame contained in the buffers. */ + + FrameSize = info->tx_buffer_list[info->start_tx_dma_buffer].rcc; + + /* if operating in Raw sync mode, reset the rcc component + * of the tx dma buffer entry, otherwise, the serial controller + * will send a closing sync char after this count. + */ + if ( info->params.mode == MGSL_MODE_RAW ) + info->tx_buffer_list[info->start_tx_dma_buffer].rcc = 0; + + /* Program the Transmit Character Length Register (TCLR) */ + /* and clear FIFO (TCC is loaded with TCLR on FIFO clear) */ + usc_OutReg( info, TCLR, (u16)FrameSize ); + + usc_RTCmd( info, RTCmd_PurgeTxFifo ); + + /* Program the address of the 1st DMA Buffer Entry in linked list */ + phys_addr = info->tx_buffer_list[info->start_tx_dma_buffer].phys_entry; + usc_OutDmaReg( info, NTARL, (u16)phys_addr ); + usc_OutDmaReg( info, NTARU, (u16)(phys_addr >> 16) ); + + usc_UnlatchTxstatusBits( info, TXSTATUS_ALL ); + usc_ClearIrqPendingBits( info, TRANSMIT_STATUS ); + usc_EnableInterrupts( info, TRANSMIT_STATUS ); + + if ( info->params.mode == MGSL_MODE_RAW && + info->num_tx_dma_buffers > 1 ) { + /* When running external sync mode, attempt to 'stream' transmit */ + /* by filling tx dma buffers as they become available. To do this */ + /* we need to enable Tx DMA EOB Status interrupts : */ + /* */ + /* 1. Arm End of Buffer (EOB) Transmit DMA Interrupt (BIT2 of TDIAR) */ + /* 2. 
Enable Transmit DMA Interrupts (BIT0 of DICR) */ + + usc_OutDmaReg( info, TDIAR, BIT2|BIT3 ); + usc_OutDmaReg( info, DICR, (u16)(usc_InDmaReg(info,DICR) | BIT0) ); + } + + /* Initialize Transmit DMA Channel */ + usc_DmaCmd( info, DmaCmd_InitTxChannel ); + + usc_TCmd( info, TCmd_SendFrame ); + + info->tx_timer.expires = jiffies + jiffies_from_ms(5000); + add_timer(&info->tx_timer); + } + info->tx_active = 1; + } + + if ( !info->tx_enabled ) { + info->tx_enabled = 1; + if ( info->params.flags & HDLC_FLAG_AUTO_CTS ) + usc_EnableTransmitter(info,ENABLE_AUTO_CTS); + else + usc_EnableTransmitter(info,ENABLE_UNCONDITIONAL); + } + +} /* end of usc_start_transmitter() */ + +/* usc_stop_transmitter() + * + * Stops the transmitter and DMA + * + * Arguments: info pointer to device isntance data + * Return Value: None + */ +void usc_stop_transmitter( struct mgsl_struct *info ) +{ + if (debug_level >= DEBUG_LEVEL_ISR) + printk("%s(%d):usc_stop_transmitter(%s)\n", + __FILE__,__LINE__, info->device_name ); + + del_timer(&info->tx_timer); + + usc_UnlatchTxstatusBits( info, TXSTATUS_ALL ); + usc_ClearIrqPendingBits( info, TRANSMIT_STATUS + TRANSMIT_DATA ); + usc_DisableInterrupts( info, TRANSMIT_STATUS + TRANSMIT_DATA ); + + usc_EnableTransmitter(info,DISABLE_UNCONDITIONAL); + usc_DmaCmd( info, DmaCmd_ResetTxChannel ); + usc_RTCmd( info, RTCmd_PurgeTxFifo ); + + info->tx_enabled = 0; + info->tx_active = 0; + +} /* end of usc_stop_transmitter() */ + +/* usc_load_txfifo() + * + * Fill the transmit FIFO until the FIFO is full or + * there is no more data to load. + * + * Arguments: info pointer to device extension (instance data) + * Return Value: None + */ +void usc_load_txfifo( struct mgsl_struct *info ) +{ + int Fifocount; + u8 TwoBytes[2]; + + if ( !info->xmit_cnt && !info->x_char ) + return; + + /* Select transmit FIFO status readback in TICR */ + usc_TCmd( info, TCmd_SelectTicrTxFifostatus ); + + /* load the Transmit FIFO until FIFOs full or all data sent */ + + while( (Fifocount = usc_InReg(info, TICR) >> 8) && info->xmit_cnt ) { + /* there is more space in the transmit FIFO and */ + /* there is more data in transmit buffer */ + + if ( (info->xmit_cnt > 1) && (Fifocount > 1) && !info->x_char ) { + /* write a 16-bit word from transmit buffer to 16C32 */ + + TwoBytes[0] = info->xmit_buf[info->xmit_tail++]; + info->xmit_tail = info->xmit_tail & (SERIAL_XMIT_SIZE-1); + TwoBytes[1] = info->xmit_buf[info->xmit_tail++]; + info->xmit_tail = info->xmit_tail & (SERIAL_XMIT_SIZE-1); + + outw( *((u16 *)TwoBytes), info->io_base + DATAREG); + + info->xmit_cnt -= 2; + info->icount.tx += 2; + } else { + /* only 1 byte left to transmit or 1 FIFO slot left */ + + outw( (inw( info->io_base + CCAR) & 0x0780) | (TDR+LSBONLY), + info->io_base + CCAR ); + + if (info->x_char) { + /* transmit pending high priority char */ + outw( info->x_char,info->io_base + CCAR ); + info->x_char = 0; + } else { + outw( info->xmit_buf[info->xmit_tail++],info->io_base + CCAR ); + info->xmit_tail = info->xmit_tail & (SERIAL_XMIT_SIZE-1); + info->xmit_cnt--; + } + info->icount.tx++; + } + } + +} /* end of usc_load_txfifo() */ + +/* usc_reset() + * + * Reset the adapter to a known state and prepare it for further use. + * + * Arguments: info pointer to device instance data + * Return Value: None + */ +void usc_reset( struct mgsl_struct *info ) +{ + if ( info->bus_type == MGSL_BUS_TYPE_PCI ) { + int i; + u32 readval; + + /* Set BIT30 of Misc Control Register */ + /* (Local Control Register 0x50) to force reset of USC. 
*/ + + volatile u32 *MiscCtrl = (u32 *)(info->lcr_base + 0x50); + u32 *LCR0BRDR = (u32 *)(info->lcr_base + 0x28); + + info->misc_ctrl_value |= BIT30; + *MiscCtrl = info->misc_ctrl_value; + + /* + * Force at least 170ns delay before clearing + * reset bit. Each read from LCR takes at least + * 30ns so 10 times for 300ns to be safe. + */ + for(i=0;i<10;i++) + readval = *MiscCtrl; + + info->misc_ctrl_value &= ~BIT30; + *MiscCtrl = info->misc_ctrl_value; + + *LCR0BRDR = BUS_DESCRIPTOR( + 1, // Write Strobe Hold (0-3) + 2, // Write Strobe Delay (0-3) + 2, // Read Strobe Delay (0-3) + 0, // NWDD (Write data-data) (0-3) + 4, // NWAD (Write Addr-data) (0-31) + 0, // NXDA (Read/Write Data-Addr) (0-3) + 0, // NRDD (Read Data-Data) (0-3) + 5 // NRAD (Read Addr-Data) (0-31) + ); + } else { + /* do HW reset */ + outb( 0,info->io_base + 8 ); + } + + info->mbre_bit = 0; + info->loopback_bits = 0; + info->usc_idle_mode = 0; + + /* + * Program the Bus Configuration Register (BCR) + * + * <15> 0 Don't use seperate address + * <14..6> 0 reserved + * <5..4> 00 IAckmode = Default, don't care + * <3> 1 Bus Request Totem Pole output + * <2> 1 Use 16 Bit data bus + * <1> 0 IRQ Totem Pole output + * <0> 0 Don't Shift Right Addr + * + * 0000 0000 0000 1100 = 0x000c + * + * By writing to io_base + SDPIN the Wait/Ack pin is + * programmed to work as a Wait pin. + */ + + outw( 0x000c,info->io_base + SDPIN ); + + + outw( 0,info->io_base ); + outw( 0,info->io_base + CCAR ); + + /* select little endian byte ordering */ + usc_RTCmd( info, RTCmd_SelectLittleEndian ); + + + /* Port Control Register (PCR) + * + * <15..14> 11 Port 7 is Output (~DMAEN, Bit 14 : 0 = Enabled) + * <13..12> 11 Port 6 is Output (~INTEN, Bit 12 : 0 = Enabled) + * <11..10> 00 Port 5 is Input (No Connect, Don't Care) + * <9..8> 00 Port 4 is Input (No Connect, Don't Care) + * <7..6> 11 Port 3 is Output (~RTS, Bit 6 : 0 = Enabled ) + * <5..4> 11 Port 2 is Output (~DTR, Bit 4 : 0 = Enabled ) + * <3..2> 01 Port 1 is Input (Dedicated RxC) + * <1..0> 01 Port 0 is Input (Dedicated TxC) + * + * 1111 0000 1111 0101 = 0xf0f5 + */ + + usc_OutReg( info, PCR, 0xf0f5 ); + + + /* + * Input/Output Control Register + * + * <15..14> 00 CTS is active low input + * <13..12> 00 DCD is active low input + * <11..10> 00 TxREQ pin is input (DSR) + * <9..8> 00 RxREQ pin is input (RI) + * <7..6> 00 TxD is output (Transmit Data) + * <5..3> 000 TxC Pin in Input (14.7456MHz Clock) + * <2..0> 100 RxC is Output (drive with BRG0) + * + * 0000 0000 0000 0100 = 0x0004 + */ + + usc_OutReg( info, IOCR, 0x0004 ); + +} /* end of usc_reset() */ + +/* usc_set_async_mode() + * + * Program adapter for asynchronous communications. + * + * Arguments: info pointer to device instance data + * Return Value: None + */ +void usc_set_async_mode( struct mgsl_struct *info ) +{ + u16 RegValue; + + /* disable interrupts while programming USC */ + usc_DisableMasterIrqBit( info ); + + outw( 0, info->io_base ); /* clear Master Bus Enable (DCAR) */ + usc_DmaCmd( info, DmaCmd_ResetAllChannels ); /* disable both DMA channels */ + + usc_loopback_frame( info ); + + /* Channel mode Register (CMR) + * + * <15..14> 00 Tx Sub modes, 00 = 1 Stop Bit + * <13..12> 00 00 = 16X Clock + * <11..8> 0000 Transmitter mode = Asynchronous + * <7..6> 00 reserved? 
+ * <5..4> 00 Rx Sub modes, 00 = 16X Clock + * <3..0> 0000 Receiver mode = Asynchronous + * + * 0000 0000 0000 0000 = 0x0 + */ + + RegValue = 0; + if ( info->params.stop_bits != 1 ) + RegValue |= BIT14; + usc_OutReg( info, CMR, RegValue ); + + + /* Receiver mode Register (RMR) + * + * <15..13> 000 encoding = None + * <12..08> 00000 reserved (Sync Only) + * <7..6> 00 Even parity + * <5> 0 parity disabled + * <4..2> 000 Receive Char Length = 8 bits + * <1..0> 00 Disable Receiver + * + * 0000 0000 0000 0000 = 0x0 + */ + + RegValue = 0; + + if ( info->params.data_bits != 8 ) + RegValue |= BIT4+BIT3+BIT2; + + if ( info->params.parity != ASYNC_PARITY_NONE ) { + RegValue |= BIT5; + if ( info->params.parity != ASYNC_PARITY_ODD ) + RegValue |= BIT6; + } + + usc_OutReg( info, RMR, RegValue ); + + + /* Set IRQ trigger level */ + + usc_RCmd( info, RCmd_SelectRicrIntLevel ); + + + /* Receive Interrupt Control Register (RICR) + * + * <15..8> ? RxFIFO IRQ Request Level + * + * Note: For async mode the receive FIFO level must be set + * to 0 to avoid the situation where the FIFO contains fewer bytes + * than the trigger level and no more data is expected. + * + * <7> 0 Exited Hunt IA (Interrupt Arm) + * <6> 0 Idle Received IA + * <5> 0 Break/Abort IA + * <4> 0 Rx Bound IA + * <3> 0 Queued status reflects oldest byte in FIFO + * <2> 0 Abort/PE IA + * <1> 0 Rx Overrun IA + * <0> 0 Select TC0 value for readback + * + * 0000 0000 0000 0000 = 0x0000 + (FIFOLEVEL in MSB) + */ + + usc_OutReg( info, RICR, 0x0000 ); + + usc_UnlatchRxstatusBits( info, RXSTATUS_ALL ); + usc_ClearIrqPendingBits( info, RECEIVE_STATUS ); + + + /* Transmit mode Register (TMR) + * + * <15..13> 000 encoding = None + * <12..08> 00000 reserved (Sync Only) + * <7..6> 00 Transmit parity Even + * <5> 0 Transmit parity Disabled + * <4..2> 000 Tx Char Length = 8 bits + * <1..0> 00 Disable Transmitter + * + * 0000 0000 0000 0000 = 0x0 + */ + + RegValue = 0; + + if ( info->params.data_bits != 8 ) + RegValue |= BIT4+BIT3+BIT2; + + if ( info->params.parity != ASYNC_PARITY_NONE ) { + RegValue |= BIT5; + if ( info->params.parity != ASYNC_PARITY_ODD ) + RegValue |= BIT6; + } + + usc_OutReg( info, TMR, RegValue ); + + usc_set_txidle( info ); + + + /* Set IRQ trigger level */ + + usc_TCmd( info, TCmd_SelectTicrIntLevel ); + + + /* Transmit Interrupt Control Register (TICR) + * + * <15..8> ?
Transmit FIFO IRQ Level + * <7> 0 Present IA (Interrupt Arm) + * <6> 1 Idle Sent IA + * <5> 0 Abort Sent IA + * <4> 0 EOF/EOM Sent IA + * <3> 0 CRC Sent IA + * <2> 0 1 = Wait for SW Trigger to Start Frame + * <1> 0 Tx Underrun IA + * <0> 0 TC0 constant on read back + * + * 0000 0000 0100 0000 = 0x0040 + */ + + usc_OutReg( info, TICR, 0x1f40 ); + + usc_UnlatchTxstatusBits( info, TXSTATUS_ALL ); + usc_ClearIrqPendingBits( info, TRANSMIT_STATUS ); + + usc_enable_async_clock( info, info->params.data_rate ); + + + /* Channel Control/status Register (CCSR) + * + * <15> X RCC FIFO Overflow status (RO) + * <14> X RCC FIFO Not Empty status (RO) + * <13> 0 1 = Clear RCC FIFO (WO) + * <12> X DPLL in Sync status (RO) + * <11> X DPLL 2 Missed Clocks status (RO) + * <10> X DPLL 1 Missed Clock status (RO) + * <9..8> 00 DPLL Resync on rising and falling edges (RW) + * <7> X SDLC Loop On status (RO) + * <6> X SDLC Loop Send status (RO) + * <5> 1 Bypass counters for TxClk and RxClk (RW) + * <4..2> 000 Last Char of SDLC frame has 8 bits (RW) + * <1..0> 00 reserved + * + * 0000 0000 0010 0000 = 0x0020 + */ + + usc_OutReg( info, CCSR, 0x0020 ); + + usc_DisableInterrupts( info, TRANSMIT_STATUS + TRANSMIT_DATA + + RECEIVE_DATA + RECEIVE_STATUS ); + + usc_ClearIrqPendingBits( info, TRANSMIT_STATUS + TRANSMIT_DATA + + RECEIVE_DATA + RECEIVE_STATUS ); + + usc_EnableMasterIrqBit( info ); + + /* Enable INTEN (Port 6, Bit12) */ + /* This connects the IRQ request signal to the ISA bus */ + /* on the ISA adapter. This has no effect for the PCI adapter */ + usc_OutReg( info, PCR, (u16)((usc_InReg(info, PCR) | BIT13) & ~BIT12) ); + +} /* end of usc_set_async_mode() */ + +/* usc_loopback_frame() + * + * Loop back a small (2 byte) dummy SDLC frame. + * Interrupts and DMA are NOT used. The purpose of this is to + * clear any 'stale' status info left over from running in async mode. + * + * The 16C32 shows the strange behaviour of marking the 1st + * received SDLC frame with a CRC error even when there is no + * CRC error. To get around this a small dummy frame of 2 bytes + * is looped back when switching from async to sync mode. + * + * Arguments: info pointer to device instance data + * Return Value: None + */ +void usc_loopback_frame( struct mgsl_struct *info ) +{ + int i; + unsigned long oldmode = info->params.mode; + + info->params.mode = MGSL_MODE_HDLC; + + usc_DisableMasterIrqBit( info ); + + usc_set_sdlc_mode( info ); + usc_enable_loopback( info, 1 ); + + /* Write 16-bit Time Constant for BRG0 */ + usc_OutReg( info, TC0R, 0 ); + + /* Channel Control Register (CCR) + * + * <15..14> 00 Don't use 32-bit Tx Control Blocks (TCBs) + * <13> 0 Trigger Tx on SW Command Disabled + * <12> 0 Flag Preamble Disabled + * <11..10> 00 Preamble Length = 8-Bits + * <9..8> 01 Preamble Pattern = flags + * <7..6> 10 Don't use 32-bit Rx status Blocks (RSBs) + * <5> 0 Trigger Rx on SW Command Disabled + * <4..0> 0 reserved + * + * 0000 0001 0000 0000 = 0x0100 + */ + + usc_OutReg( info, CCR, 0x0100 ); + + /* SETUP RECEIVER */ + usc_RTCmd( info, RTCmd_PurgeRxFifo ); + usc_EnableReceiver(info,ENABLE_UNCONDITIONAL); + + /* SETUP TRANSMITTER */ + /* Program the Transmit Character Length Register (TCLR) */ + /* and clear FIFO (TCC is loaded with TCLR on FIFO clear) */ + usc_OutReg( info, TCLR, 2 ); + usc_RTCmd( info, RTCmd_PurgeTxFifo ); + + /* unlatch Tx status bits, and start transmit channel. 
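(The single 16-bit write to DATAREG that follows supplies both bytes of the 2-byte dummy frame, since TCLR was programmed to 2 above.)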
*/ + usc_UnlatchTxstatusBits(info,TXSTATUS_ALL); + outw(0,info->io_base + DATAREG); + + /* ENABLE TRANSMITTER */ + usc_TCmd( info, TCmd_SendFrame ); + usc_EnableTransmitter(info,ENABLE_UNCONDITIONAL); + + /* WAIT FOR RECEIVE COMPLETE */ + for (i=0 ; i<1000 ; i++) + if (usc_InReg( info, RCSR ) & (BIT8 + BIT4 + BIT3 + BIT1)) + break; + + /* clear Internal Data loopback mode */ + usc_enable_loopback(info, 0); + + usc_EnableMasterIrqBit(info); + + info->params.mode = oldmode; + +} /* end of usc_loopback_frame() */ + +/* usc_set_sync_mode() Programs the USC for SDLC communications. + * + * Arguments: info pointer to adapter info structure + * Return Value: None + */ +void usc_set_sync_mode( struct mgsl_struct *info ) +{ + usc_loopback_frame( info ); + usc_set_sdlc_mode( info ); + + /* Enable INTEN (Port 6, Bit12) */ + /* This connects the IRQ request signal to the ISA bus */ + /* on the ISA adapter. This has no effect for the PCI adapter */ + usc_OutReg(info, PCR, (u16)((usc_InReg(info, PCR) | BIT13) & ~BIT12)); + + usc_enable_aux_clock(info, info->params.clock_speed); + + if (info->params.loopback) + usc_enable_loopback(info,1); + +} /* end of mgsl_set_sync_mode() */ + +/* usc_set_txidle() Set the HDLC idle mode for the transmitter. + * + * Arguments: info pointer to device instance data + * Return Value: None + */ +void usc_set_txidle( struct mgsl_struct *info ) +{ + u16 usc_idle_mode = IDLEMODE_FLAGS; + + /* Map API idle mode to USC register bits */ + + switch( info->idle_mode ){ + case HDLC_TXIDLE_FLAGS: usc_idle_mode = IDLEMODE_FLAGS; break; + case HDLC_TXIDLE_ALT_ZEROS_ONES: usc_idle_mode = IDLEMODE_ALT_ONE_ZERO; break; + case HDLC_TXIDLE_ZEROS: usc_idle_mode = IDLEMODE_ZERO; break; + case HDLC_TXIDLE_ONES: usc_idle_mode = IDLEMODE_ONE; break; + case HDLC_TXIDLE_ALT_MARK_SPACE: usc_idle_mode = IDLEMODE_ALT_MARK_SPACE; break; + case HDLC_TXIDLE_SPACE: usc_idle_mode = IDLEMODE_SPACE; break; + case HDLC_TXIDLE_MARK: usc_idle_mode = IDLEMODE_MARK; break; + } + + info->usc_idle_mode = usc_idle_mode; + //usc_OutReg(info, TCSR, usc_idle_mode); + info->tcsr_value &= ~IDLEMODE_MASK; /* clear idle mode bits */ + info->tcsr_value += usc_idle_mode; + usc_OutReg(info, TCSR, info->tcsr_value); + + /* + * if SyncLink WAN adapter is running in external sync mode, the + * transmitter has been set to Monosync in order to try to mimic + * a true raw outbound bit stream. Monosync still sends an open/close + * sync char at the start/end of a frame. Try to match those sync + * patterns to the idle mode set here + */ + if ( info->params.mode == MGSL_MODE_RAW ) { + unsigned char syncpat = 0; + switch( info->idle_mode ) { + case HDLC_TXIDLE_FLAGS: + syncpat = 0x7e; + break; + case HDLC_TXIDLE_ALT_ZEROS_ONES: + syncpat = 0x55; + break; + case HDLC_TXIDLE_ZEROS: + case HDLC_TXIDLE_SPACE: + syncpat = 0x00; + break; + case HDLC_TXIDLE_ONES: + case HDLC_TXIDLE_MARK: + syncpat = 0xff; + break; + case HDLC_TXIDLE_ALT_MARK_SPACE: + syncpat = 0xaa; + break; + } + + usc_SetTransmitSyncChars(info,syncpat,syncpat); + } + +} /* end of usc_set_txidle() */ + +/* usc_get_serial_signals() + * + * Query the adapter for the state of the V24 status (input) signals. + * + * Arguments: info pointer to device instance data + * Return Value: None + */ +void usc_get_serial_signals( struct mgsl_struct *info ) +{ + u16 status; + + /* clear all serial signals except DTR and RTS */ + info->serial_signals &= SerialSignal_DTR + SerialSignal_RTS; + + /* Read the Misc Interrupt status Register (MISR) to get */ + /* the V24 status signals. 
*/ + + status = usc_InReg( info, MISR ); + + /* set serial signal bits to reflect MISR */ + + if ( status & MISCSTATUS_CTS ) + info->serial_signals |= SerialSignal_CTS; + + if ( status & MISCSTATUS_DCD ) + info->serial_signals |= SerialSignal_DCD; + + if ( status & MISCSTATUS_RI ) + info->serial_signals |= SerialSignal_RI; + + if ( status & MISCSTATUS_DSR ) + info->serial_signals |= SerialSignal_DSR; + +} /* end of usc_get_serial_signals() */ + +/* usc_set_serial_signals() + * + * Set the state of DTR and RTS based on contents of + * serial_signals member of device extension. + * + * Arguments: info pointer to device instance data + * Return Value: None + */ +void usc_set_serial_signals( struct mgsl_struct *info ) +{ + u16 Control; + unsigned char V24Out = info->serial_signals; + + /* get the current value of the Port Control Register (PCR) */ + + Control = usc_InReg( info, PCR ); + + if ( V24Out & SerialSignal_RTS ) + Control &= ~(BIT6); + else + Control |= BIT6; + + if ( V24Out & SerialSignal_DTR ) + Control &= ~(BIT4); + else + Control |= BIT4; + + usc_OutReg( info, PCR, Control ); + +} /* end of usc_set_serial_signals() */ + +/* usc_enable_async_clock() + * + * Enable the async clock at the specified frequency. + * + * Arguments: info pointer to device instance data + * data_rate data rate of clock in bps + * 0 disables the AUX clock. + * Return Value: None + */ +void usc_enable_async_clock( struct mgsl_struct *info, u32 data_rate ) +{ + if ( data_rate ) { + /* + * Clock mode Control Register (CMCR) + * + * <15..14> 00 counter 1 Disabled + * <13..12> 00 counter 0 Disabled + * <11..10> 11 BRG1 Input is TxC Pin + * <9..8> 11 BRG0 Input is TxC Pin + * <7..6> 01 DPLL Input is BRG1 Output + * <5..3> 100 TxCLK comes from BRG0 + * <2..0> 100 RxCLK comes from BRG0 + * + * 0000 1111 0110 0100 = 0x0f64 + */ + + usc_OutReg( info, CMCR, 0x0f64 ); + + + /* + * Write 16-bit Time Constant for BRG0 + * Time Constant = (ClkSpeed / data_rate) - 1 + * ClkSpeed = 921600 (ISA), 691200 (PCI) + */ + + if ( info->bus_type == MGSL_BUS_TYPE_PCI ) + usc_OutReg( info, TC0R, (u16)((691200/data_rate) - 1) ); + else + usc_OutReg( info, TC0R, (u16)((921600/data_rate) - 1) ); + + + /* + * Hardware Configuration Register (HCR) + * Clear Bit 1, BRG0 mode = Continuous + * Set Bit 0 to enable BRG0. + */ + + usc_OutReg( info, HCR, + (u16)((usc_InReg( info, HCR ) & ~BIT1) | BIT0) ); + + + /* Input/Output Control Reg, <2..0> = 100, Drive RxC pin with BRG0 */ + + usc_OutReg( info, IOCR, + (u16)((usc_InReg(info, IOCR) & 0xfff8) | 0x0004) ); + } else { + /* data rate == 0 so turn off BRG0 */ + usc_OutReg( info, HCR, (u16)(usc_InReg( info, HCR ) & ~BIT0) ); + } + +} /* end of usc_enable_async_clock() */ + +/* + * Buffer Structures: + * + * Normal memory access uses virtual addresses that can make discontiguous + * physical memory pages appear to be contiguous in the virtual address + * space (the processors memory mapping handles the conversions). + * + * DMA transfers require physically contiguous memory. This is because + * the DMA system controller and DMA bus masters deal with memory using + * only physical addresses. + * + * This causes a problem under Windows NT when large DMA buffers are + * needed. Fragmentation of the nonpaged pool prevents allocations of + * physically contiguous buffers larger than the PAGE_SIZE. + * + * However the 16C32 supports Bus Master Scatter/Gather DMA which + * allows DMA transfers to physically discontiguous buffers. 
Information + * about each data transfer buffer is contained in a memory structure + * called a 'buffer entry'. A list of buffer entries is maintained + * to track and control the use of the data transfer buffers. + * + * To support this strategy we will allocate sufficient PAGE_SIZE + * contiguous memory buffers to allow for the total required buffer + * space. + * + * The 16C32 accesses the list of buffer entries using Bus Master + * DMA. Control information is read from the buffer entries by the + * 16C32 to control data transfers. status information is written to + * the buffer entries by the 16C32 to indicate the status of completed + * transfers. + * + * The CPU writes control information to the buffer entries to control + * the 16C32 and reads status information from the buffer entries to + * determine information about received and transmitted frames. + * + * Because the CPU and 16C32 (adapter) both need simultaneous access + * to the buffer entries, the buffer entry memory is allocated with + * HalAllocateCommonBuffer(). This restricts the size of the buffer + * entry list to PAGE_SIZE. + * + * The actual data buffers on the other hand will only be accessed + * by the CPU or the adapter but not by both simultaneously. This allows + * Scatter/Gather packet based DMA procedures for using physically + * discontiguous pages. + */ + +/* + * mgsl_reset_tx_dma_buffers() + * + * Set the count for all transmit buffers to 0 to indicate the + * buffer is available for use and set the current buffer to the + * first buffer. This effectively makes all buffers free and + * discards any data in buffers. + * + * Arguments: info pointer to device instance data + * Return Value: None + */ +void mgsl_reset_tx_dma_buffers( struct mgsl_struct *info ) +{ + unsigned int i; + + for ( i = 0; i < info->tx_buffer_count; i++ ) { + *((unsigned long *)&(info->tx_buffer_list[i].count)) = 0; + } + + info->current_tx_buffer = 0; + info->start_tx_dma_buffer = 0; + info->tx_dma_buffers_used = 0; + + info->get_tx_holding_index = 0; + info->put_tx_holding_index = 0; + info->tx_holding_count = 0; + +} /* end of mgsl_reset_tx_dma_buffers() */ + +/* + * num_free_tx_dma_buffers() + * + * returns the number of free tx dma buffers available + * + * Arguments: info pointer to device instance data + * Return Value: number of free tx dma buffers + */ +int num_free_tx_dma_buffers(struct mgsl_struct *info) +{ + return info->tx_buffer_count - info->tx_dma_buffers_used; +} + +/* + * mgsl_reset_rx_dma_buffers() + * + * Set the count for all receive buffers to DMABUFFERSIZE + * and set the current buffer to the first buffer. This effectively + * makes all buffers free and discards any data in buffers. + * + * Arguments: info pointer to device instance data + * Return Value: None + */ +void mgsl_reset_rx_dma_buffers( struct mgsl_struct *info ) +{ + unsigned int i; + + for ( i = 0; i < info->rx_buffer_count; i++ ) { + *((unsigned long *)&(info->rx_buffer_list[i].count)) = DMABUFFERSIZE; +// info->rx_buffer_list[i].count = DMABUFFERSIZE; +// info->rx_buffer_list[i].status = 0; + } + + info->current_rx_buffer = 0; + +} /* end of mgsl_reset_rx_dma_buffers() */ + +/* + * mgsl_free_rx_frame_buffers() + * + * Free the receive buffers used by a received SDLC + * frame such that the buffers can be reused. 
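+ * + * (Each entry is recycled below with a single 32-bit store through + * its 'count' field; on this little-endian layout the store also + * clears the adjacent 16-bit 'status' field, which is why the + * commented-out per-field assignments are equivalent. A sketch of + * the fields a DMABUFFERENTRY must carry for these routines to + * work, inferred only from their uses in this file: + * + * u16 count; // data count; 16C32 zeroes this on use + * u16 status; // set by 16C32 in a frame's final entry + * u16 rcc; // residual character count for frame sizing + * char *virt_addr; // CPU view of the data buffer + * u32 phys_entry; // bus address of this list entry + * + * The real declaration lives elsewhere in this driver.)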
+ * + * Arguments: + * + * info pointer to device instance data + * StartIndex index of 1st receive buffer of frame + * EndIndex index of last receive buffer of frame + * + * Return Value: None + */ +void mgsl_free_rx_frame_buffers( struct mgsl_struct *info, unsigned int StartIndex, unsigned int EndIndex ) +{ + int Done = 0; + DMABUFFERENTRY *pBufEntry; + unsigned int Index; + + /* Starting with 1st buffer entry of the frame clear the status */ + /* field and set the count field to DMA Buffer Size. */ + + Index = StartIndex; + + while( !Done ) { + pBufEntry = &(info->rx_buffer_list[Index]); + + if ( Index == EndIndex ) { + /* This is the last buffer of the frame! */ + Done = 1; + } + + /* reset current buffer for reuse */ +// pBufEntry->status = 0; +// pBufEntry->count = DMABUFFERSIZE; + *((unsigned long *)&(pBufEntry->count)) = DMABUFFERSIZE; + + /* advance to next buffer entry in linked list */ + Index++; + if ( Index == info->rx_buffer_count ) + Index = 0; + } + + /* set current buffer to next buffer after last buffer of frame */ + info->current_rx_buffer = Index; + +} /* end of free_rx_frame_buffers() */ + +/* mgsl_get_rx_frame() + * + * This function attempts to return a received SDLC frame from the + * receive DMA buffers. Only frames received without errors are returned. + * + * Arguments: info pointer to device extension + * Return Value: 1 if frame returned, otherwise 0 + */ +int mgsl_get_rx_frame(struct mgsl_struct *info) +{ + unsigned int StartIndex, EndIndex; /* index of 1st and last buffers of Rx frame */ + unsigned short status; + DMABUFFERENTRY *pBufEntry; + unsigned int framesize = 0; + int ReturnCode = 0; + unsigned long flags; + struct tty_struct *tty = info->tty; + int return_frame = 0; + + /* + * current_rx_buffer points to the 1st buffer of the next available + * receive frame. To find the last buffer of the frame look for + * a non-zero status field in the buffer entries. (The status + * field is set by the 16C32 after completing a receive frame. + */ + + StartIndex = EndIndex = info->current_rx_buffer; + + while( !info->rx_buffer_list[EndIndex].status ) { + /* + * If the count field of the buffer entry is non-zero then + * this buffer has not been used. (The 16C32 clears the count + * field when it starts using the buffer.) If an unused buffer + * is encountered then there are no frames available. + */ + + if ( info->rx_buffer_list[EndIndex].count ) + goto Cleanup; + + /* advance to next buffer entry in linked list */ + EndIndex++; + if ( EndIndex == info->rx_buffer_count ) + EndIndex = 0; + + /* if entire list searched then no frame available */ + if ( EndIndex == StartIndex ) { + /* If this occurs then something bad happened, + * all buffers have been 'used' but none mark + * the end of a frame. Reset buffers and receiver. 
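+ * (usc_start_receiver() resets every entry via + * mgsl_reset_rx_dma_buffers() and reprograms the DMA controller + * from entry 0, returning the ring to a known state.)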
+ */ + + if ( info->rx_enabled ){ + spin_lock_irqsave(&info->irq_spinlock,flags); + usc_start_receiver(info); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + } + goto Cleanup; + } + } + + + /* check status of receive frame */ + + status = info->rx_buffer_list[EndIndex].status; + + if ( status & (RXSTATUS_SHORT_FRAME + RXSTATUS_OVERRUN + + RXSTATUS_CRC_ERROR + RXSTATUS_ABORT) ) { + if ( status & RXSTATUS_SHORT_FRAME ) + info->icount.rxshort++; + else if ( status & RXSTATUS_ABORT ) + info->icount.rxabort++; + else if ( status & RXSTATUS_OVERRUN ) + info->icount.rxover++; + else { + info->icount.rxcrc++; + if ( info->params.crc_type & HDLC_CRC_RETURN_EX ) + return_frame = 1; + } + framesize = 0; +#ifdef CONFIG_SYNCLINK_SYNCPPP + info->netstats.rx_errors++; + info->netstats.rx_frame_errors++; +#endif + } else + return_frame = 1; + + if ( return_frame ) { + /* receive frame has no errors, get frame size. + * The frame size is the starting value of the RCC (which was + * set to 0xffff) minus the ending value of the RCC (decremented + * once for each receive character) minus 2 for the 16-bit CRC. + */ + + framesize = RCLRVALUE - info->rx_buffer_list[EndIndex].rcc; + + /* adjust frame size for CRC if any */ + if ( info->params.crc_type == HDLC_CRC_16_CCITT ) + framesize -= 2; + else if ( info->params.crc_type == HDLC_CRC_32_CCITT ) + framesize -= 4; + } + + if ( debug_level >= DEBUG_LEVEL_BH ) + printk("%s(%d):mgsl_get_rx_frame(%s) status=%04X size=%d\n", + __FILE__,__LINE__,info->device_name,status,framesize); + + if ( debug_level >= DEBUG_LEVEL_DATA ) + mgsl_trace_block(info,info->rx_buffer_list[StartIndex].virt_addr, + MIN(framesize,DMABUFFERSIZE),0); + + if (framesize) { + if ( ( (info->params.crc_type & HDLC_CRC_RETURN_EX) && + ((framesize+1) > info->max_frame_size) ) || + (framesize > info->max_frame_size) ) + info->icount.rxlong++; + else { + /* copy dma buffer(s) to contiguous intermediate buffer */ + int copy_count = framesize; + int index = StartIndex; + unsigned char *ptmp = info->intermediate_rxbuffer; + + if ( !(status & RXSTATUS_CRC_ERROR)) + info->icount.rxok++; + + while(copy_count) { + int partial_count; + if ( copy_count > DMABUFFERSIZE ) + partial_count = DMABUFFERSIZE; + else + partial_count = copy_count; + + pBufEntry = &(info->rx_buffer_list[index]); + memcpy( ptmp, pBufEntry->virt_addr, partial_count ); + ptmp += partial_count; + copy_count -= partial_count; + + if ( ++index == info->rx_buffer_count ) + index = 0; + } + + if ( info->params.crc_type & HDLC_CRC_RETURN_EX ) { + ++framesize; + *ptmp = (status & RXSTATUS_CRC_ERROR ? + RX_CRC_ERROR : + RX_OK); + + if ( debug_level >= DEBUG_LEVEL_DATA ) + printk("%s(%d):mgsl_get_rx_frame(%s) rx frame status=%d\n", + __FILE__,__LINE__,info->device_name, + *ptmp); + } + +#ifdef CONFIG_SYNCLINK_SYNCPPP + if (info->netcount) { + /* pass frame to syncppp device */ + mgsl_sppp_rx_done(info,info->intermediate_rxbuffer,framesize); + } + else +#endif + { + /* Call the line discipline receive callback directly. */ + if ( tty && tty->ldisc.receive_buf ) + tty->ldisc.receive_buf(tty, info->intermediate_rxbuffer, info->flag_buf, framesize); + } + } + } + /* Free the buffers used by this frame. */ + mgsl_free_rx_frame_buffers( info, StartIndex, EndIndex ); + + ReturnCode = 1; + +Cleanup: + + if ( info->rx_enabled && info->rx_overflow ) { + /* The receiver needs to restarted because of + * a receive overflow (buffer or FIFO). If the + * receive buffers are now empty, then restart receiver. 
+ */ + + if ( !info->rx_buffer_list[EndIndex].status && + info->rx_buffer_list[EndIndex].count ) { + spin_lock_irqsave(&info->irq_spinlock,flags); + usc_start_receiver(info); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + } + } + + return ReturnCode; + +} /* end of mgsl_get_rx_frame() */ + +/* mgsl_get_raw_rx_frame() + * + * This function attempts to return a received frame from the + * receive DMA buffers when running in external loop mode. In this mode, + * we will return at most one DMABUFFERSIZE frame to the application. + * The USC receiver is triggering off of DCD going active to start a new + * frame, and DCD going inactive to terminate the frame (similar to + * processing a closing flag character). + * + * In this routine, we will return DMABUFFERSIZE "chunks" at a time. + * If DCD goes inactive, the last Rx DMA Buffer will have a non-zero + * status field and the RCC field will indicate the length of the + * entire received frame. We take this RCC field and get the modulus + * of RCC and DMABUFFERSIZE to determine the number of bytes in the + * last Rx DMA buffer and return that last portion of the frame. + * + * Arguments: info pointer to device extension + * Return Value: 1 if frame returned, otherwise 0 + */ +int mgsl_get_raw_rx_frame(struct mgsl_struct *info) +{ + unsigned int CurrentIndex, NextIndex; + unsigned short status; + DMABUFFERENTRY *pBufEntry; + unsigned int framesize = 0; + int ReturnCode = 0; + unsigned long flags; + struct tty_struct *tty = info->tty; + + /* + * current_rx_buffer points to the 1st buffer of the next available + * receive frame. The status field is set by the 16C32 after + * completing a receive frame. If the status field of this buffer + * is zero, either the USC is still filling this buffer or this + * is one of a series of buffers making up a received frame. + * + * If the count field of this buffer is zero, the USC is either + * using this buffer or has used this buffer. Look at the count + * field of the next buffer. If that next buffer's count is + * non-zero, the USC is still actively using the current buffer. + * Otherwise, if the next buffer's count field is zero, the + * current buffer is complete and the USC is using the next + * buffer. + */ + CurrentIndex = NextIndex = info->current_rx_buffer; + ++NextIndex; + if ( NextIndex == info->rx_buffer_count ) + NextIndex = 0; + + if ( info->rx_buffer_list[CurrentIndex].status != 0 || + (info->rx_buffer_list[CurrentIndex].count == 0 && + info->rx_buffer_list[NextIndex].count == 0)) { + /* + * Either the status field of this dma buffer is non-zero + * (indicating the last buffer of a receive frame) or the next + * buffer is marked as in use -- implying this buffer is complete + * and an intermediate buffer for this received frame. + */ + + status = info->rx_buffer_list[CurrentIndex].status; + + if ( status & (RXSTATUS_SHORT_FRAME + RXSTATUS_OVERRUN + + RXSTATUS_CRC_ERROR + RXSTATUS_ABORT) ) { + if ( status & RXSTATUS_SHORT_FRAME ) + info->icount.rxshort++; + else if ( status & RXSTATUS_ABORT ) + info->icount.rxabort++; + else if ( status & RXSTATUS_OVERRUN ) + info->icount.rxover++; + else + info->icount.rxcrc++; + framesize = 0; + } else { + /* + * A receive frame is available, get frame size and status. + * + * The frame size is the starting value of the RCC (which was + * set to 0xffff) minus the ending value of the RCC (decremented + * once for each receive character) minus 2 or 4 for the 16-bit + * or 32-bit CRC. + * + * If the status field is zero, this is an intermediate buffer. 
+
+/* mgsl_get_raw_rx_frame()
+ *
+ *	This function attempts to return a received frame from the
+ *	receive DMA buffers when running in external loop mode. In this mode,
+ *	we will return at most one DMABUFFERSIZE frame to the application.
+ *	The USC receiver is triggering off of DCD going active to start a new
+ *	frame, and DCD going inactive to terminate the frame (similar to
+ *	processing a closing flag character).
+ *
+ *	In this routine, we will return DMABUFFERSIZE "chunks" at a time.
+ *	If DCD goes inactive, the last Rx DMA Buffer will have a non-zero
+ *	status field and the RCC field will indicate the length of the
+ *	entire received frame. We take this RCC field and get the modulus
+ *	of RCC and DMABUFFERSIZE to determine the number of bytes in the
+ *	last Rx DMA buffer, and return that last portion of the frame.
+ *
+ * Arguments:		info	pointer to device extension
+ * Return Value:	1 if frame returned, otherwise 0
+ */
+int mgsl_get_raw_rx_frame(struct mgsl_struct *info)
+{
+	unsigned int CurrentIndex, NextIndex;
+	unsigned short status;
+	DMABUFFERENTRY *pBufEntry;
+	unsigned int framesize = 0;
+	int ReturnCode = 0;
+	unsigned long flags;
+	struct tty_struct *tty = info->tty;
+
+	/*
+	 * current_rx_buffer points to the 1st buffer of the next available
+	 * receive frame. The status field is set by the 16C32 after
+	 * completing a receive frame. If the status field of this buffer
+	 * is zero, either the USC is still filling this buffer or this
+	 * is one of a series of buffers making up a received frame.
+	 *
+	 * If the count field of this buffer is zero, the USC is either
+	 * using this buffer or has used this buffer. Look at the count
+	 * field of the next buffer. If that next buffer's count is
+	 * non-zero, the USC is still actively using the current buffer.
+	 * Otherwise, if the next buffer's count field is zero, the
+	 * current buffer is complete and the USC is using the next
+	 * buffer.
+	 */
+	CurrentIndex = NextIndex = info->current_rx_buffer;
+	++NextIndex;
+	if ( NextIndex == info->rx_buffer_count )
+		NextIndex = 0;
+
+	if ( info->rx_buffer_list[CurrentIndex].status != 0 ||
+		(info->rx_buffer_list[CurrentIndex].count == 0 &&
+			info->rx_buffer_list[NextIndex].count == 0)) {
+		/*
+		 * Either the status field of this dma buffer is non-zero
+		 * (indicating the last buffer of a receive frame) or the next
+		 * buffer is marked as in use -- implying this buffer is complete
+		 * and an intermediate buffer for this received frame.
+		 */
+
+		status = info->rx_buffer_list[CurrentIndex].status;
+
+		if ( status & (RXSTATUS_SHORT_FRAME + RXSTATUS_OVERRUN +
+				RXSTATUS_CRC_ERROR + RXSTATUS_ABORT) ) {
+			if ( status & RXSTATUS_SHORT_FRAME )
+				info->icount.rxshort++;
+			else if ( status & RXSTATUS_ABORT )
+				info->icount.rxabort++;
+			else if ( status & RXSTATUS_OVERRUN )
+				info->icount.rxover++;
+			else
+				info->icount.rxcrc++;
+			framesize = 0;
+		} else {
+			/*
+			 * A receive frame is available, get frame size and status.
+			 *
+			 * The frame size is the starting value of the RCC (which was
+			 * set to 0xffff) minus the ending value of the RCC (decremented
+			 * once for each receive character) minus 2 or 4 for the 16-bit
+			 * or 32-bit CRC.
+			 *
+			 * If the status field is zero, this is an intermediate buffer.
+			 * Its size is 4K.
+			 *
+			 * If the DMA Buffer Entry's Status field is non-zero, the
+			 * receive operation completed normally (ie: DCD dropped). The
+			 * RCC field is valid and holds the received frame size.
+			 * It is possible that the RCC field will be zero on a DMA buffer
+			 * entry with a non-zero status. This can occur if the total
+			 * frame size (number of bytes between the time DCD goes active
+			 * to the time DCD goes inactive) exceeds 65535 bytes. In this
+			 * case the 16C32 has underrun on the RCC count and appears to
+			 * stop updating this counter to let us know the actual received
+			 * frame size. If this happens (non-zero status and zero RCC),
+			 * simply return the entire RxDMA Buffer.
+			 */
+			if ( status ) {
+				/*
+				 * In the event that the final RxDMA Buffer is
+				 * terminated with a non-zero status and the RCC
+				 * field is zero, we interpret this as the RCC
+				 * having underflowed (received frame > 65535 bytes).
+				 *
+				 * Signal the event to the user by passing back
+				 * a status of RxStatus_CrcError, returning the full
+				 * buffer, and let the app figure out what data is
+				 * actually valid.
+				 */
+				if ( info->rx_buffer_list[CurrentIndex].rcc )
+					framesize = RCLRVALUE - info->rx_buffer_list[CurrentIndex].rcc;
+				else
+					framesize = DMABUFFERSIZE;
+			}
+			else
+				framesize = DMABUFFERSIZE;
+		}
+
+		if ( framesize > DMABUFFERSIZE ) {
+			/*
+			 * if running in raw sync mode, the ISR handler for
+			 * End Of Buffer events terminates all buffers at 4K.
+			 * If this frame size is said to be >4K, get the
+			 * actual number of bytes of the frame in this buffer.
+			 */
+			framesize = framesize % DMABUFFERSIZE;
+		}
+
+
+		if ( debug_level >= DEBUG_LEVEL_BH )
+			printk("%s(%d):mgsl_get_raw_rx_frame(%s) status=%04X size=%d\n",
+				__FILE__,__LINE__,info->device_name,status,framesize);
+
+		if ( debug_level >= DEBUG_LEVEL_DATA )
+			mgsl_trace_block(info,info->rx_buffer_list[CurrentIndex].virt_addr,
+				MIN(framesize,DMABUFFERSIZE),0);
+
+		if (framesize) {
+			/* copy dma buffer(s) to contiguous intermediate buffer */
+			/* NOTE: we never copy more than DMABUFFERSIZE bytes */
+
+			pBufEntry = &(info->rx_buffer_list[CurrentIndex]);
+			memcpy( info->intermediate_rxbuffer, pBufEntry->virt_addr, framesize);
+			info->icount.rxok++;
+
+			/* Call the line discipline receive callback directly. */
+			if ( tty && tty->ldisc.receive_buf )
+				tty->ldisc.receive_buf(tty, info->intermediate_rxbuffer, info->flag_buf, framesize);
+		}
+
+		/* Free the buffers used by this frame. */
+		mgsl_free_rx_frame_buffers( info, CurrentIndex, CurrentIndex );
+
+		ReturnCode = 1;
+	}
+
+
+	if ( info->rx_enabled && info->rx_overflow ) {
+		/* The receiver needs to be restarted because of
+		 * a receive overflow (buffer or FIFO). If the
+		 * receive buffers are now empty, then restart the receiver.
+		 */
+
+		if ( !info->rx_buffer_list[CurrentIndex].status &&
+			info->rx_buffer_list[CurrentIndex].count ) {
+			spin_lock_irqsave(&info->irq_spinlock,flags);
+			usc_start_receiver(info);
+			spin_unlock_irqrestore(&info->irq_spinlock,flags);
+		}
+	}
+
+	return ReturnCode;
+
+}	/* end of mgsl_get_raw_rx_frame() */
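
Both receive paths above, and mgsl_load_tx_dma_buffer() below, walk a ring of fixed-size DMA buffers: copy at most DMABUFFERSIZE bytes per entry and wrap the index at the end of the list. A self-contained sketch of that chunking loop, with hypothetical names standing in for the driver's buffer list:

	#include <string.h>

	#define DMABUFFERSIZE 4096	/* the 4K buffers described above */

	/* Copy 'size' bytes into consecutive fixed-size buffers,
	 * wrapping at 'count' entries. Returns the index of the next
	 * free buffer, like the driver's current_tx_buffer.
	 */
	static unsigned int ring_load(unsigned char **bufs, unsigned int count,
				      unsigned int index,
				      const unsigned char *src, unsigned int size)
	{
		while (size) {
			unsigned int chunk = (size > DMABUFFERSIZE) ?
						DMABUFFERSIZE : size;

			memcpy(bufs[index], src, chunk);
			src  += chunk;
			size -= chunk;

			if (++index == count)
				index = 0;	/* wrap around the ring */
		}
		return index;
	}
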
+
+/* mgsl_load_tx_dma_buffer()
+ *
+ *	Load the transmit DMA buffer with the specified data.
+ *
+ * Arguments:
+ *
+ *	info		pointer to device extension
+ *	Buffer		pointer to buffer containing frame to load
+ *	BufferSize	size in bytes of frame in Buffer
+ *
+ * Return Value:	None
+ */
+void mgsl_load_tx_dma_buffer(struct mgsl_struct *info, const char *Buffer,
+	unsigned int BufferSize)
+{
+	unsigned short Copycount;
+	unsigned int i = 0;
+	DMABUFFERENTRY *pBufEntry;
+
+	if ( debug_level >= DEBUG_LEVEL_DATA )
+		mgsl_trace_block(info,Buffer, MIN(BufferSize,DMABUFFERSIZE), 1);
+
+	if (info->params.flags & HDLC_FLAG_HDLC_LOOPMODE) {
+		/* set CMR:13 to start transmit when
+		 * next GoAhead (abort) is received
+		 */
+		info->cmr_value |= BIT13;
+	}
+
+	/* begin loading the frame in the next available tx dma
+	 * buffer, remember its starting location for setting
+	 * up the tx dma operation
+	 */
+	i = info->current_tx_buffer;
+	info->start_tx_dma_buffer = i;
+
+	/* Setup the status and RCC (Frame Size) fields of the 1st */
+	/* buffer entry in the transmit DMA buffer list. */
+
+	info->tx_buffer_list[i].status = info->cmr_value & 0xf000;
+	info->tx_buffer_list[i].rcc = BufferSize;
+	info->tx_buffer_list[i].count = BufferSize;
+
+	/* Copy frame data from 1st source buffer to the DMA buffers. */
+	/* The frame data may span multiple DMA buffers. */
+
+	while( BufferSize ){
+		/* Get a pointer to next DMA buffer entry. */
+		pBufEntry = &info->tx_buffer_list[i++];
+
+		if ( i == info->tx_buffer_count )
+			i=0;
+
+		/* Calculate the number of bytes that can be copied from */
+		/* the source buffer to this DMA buffer. */
+		if ( BufferSize > DMABUFFERSIZE )
+			Copycount = DMABUFFERSIZE;
+		else
+			Copycount = BufferSize;
+
+		/* Actually copy data from source buffer to DMA buffer. */
+		/* Also set the data count for this individual DMA buffer. */
+		if ( info->bus_type == MGSL_BUS_TYPE_PCI )
+			mgsl_load_pci_memory(pBufEntry->virt_addr, Buffer,Copycount);
+		else
+			memcpy(pBufEntry->virt_addr, Buffer, Copycount);
+
+		pBufEntry->count = Copycount;
+
+		/* Advance source pointer and reduce remaining data count. */
+		Buffer += Copycount;
+		BufferSize -= Copycount;
+
+		++info->tx_dma_buffers_used;
+	}
+
+	/* remember next available tx dma buffer */
+	info->current_tx_buffer = i;
+
+}	/* end of mgsl_load_tx_dma_buffer() */
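
The register test that follows writes a table of bit patterns into six different registers at rotating offsets, so no two registers hold the same value at the same time, then reads everything back. A userspace sketch of the same scheme, with an array standing in for the USC registers:

	#include <stdio.h>

	static unsigned short patterns[] =
		{ 0x0000, 0xffff, 0xaaaa, 0x5555, 0x1234, 0x6969, 0x9696, 0x0f0f };
	#define NPATTERNS (sizeof(patterns)/sizeof(patterns[0]))
	#define NREGS 6		/* TC0R, TC1R, TCLR, RCLR, RSR, TBCR */

	int main(void)
	{
		unsigned short regs[NREGS];	/* stand-in for real registers */
		unsigned int i, r;

		for (i = 0; i < NPATTERNS; i++) {
			for (r = 0; r < NREGS; r++)	/* write phase */
				regs[r] = patterns[(i + r) % NPATTERNS];
			for (r = 0; r < NREGS; r++)	/* verify phase */
				if (regs[r] != patterns[(i + r) % NPATTERNS]) {
					printf("FAIL\n");
					return 1;
				}
		}
		printf("PASS\n");
		return 0;
	}
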
+
+/*
+ * mgsl_register_test()
+ *
+ *	Performs a register test of the 16C32.
+ *
+ * Arguments:		info	pointer to device instance data
+ * Return Value:	TRUE if test passed, otherwise FALSE
+ */
+BOOLEAN mgsl_register_test( struct mgsl_struct *info )
+{
+	static unsigned short BitPatterns[] =
+		{ 0x0000, 0xffff, 0xaaaa, 0x5555, 0x1234, 0x6969, 0x9696, 0x0f0f };
+	static unsigned int Patterncount = sizeof(BitPatterns)/sizeof(unsigned short);
+	unsigned int i;
+	BOOLEAN rc = TRUE;
+	unsigned long flags;
+
+	spin_lock_irqsave(&info->irq_spinlock,flags);
+	usc_reset(info);
+
+	/* Verify the reset state of some registers. */
+
+	if ( (usc_InReg( info, SICR ) != 0) ||
+		(usc_InReg( info, IVR ) != 0) ||
+		(usc_InDmaReg( info, DIVR ) != 0) ){
+		rc = FALSE;
+	}
+
+	if ( rc == TRUE ){
+		/* Write bit patterns to various registers but do it out of */
+		/* sync, then read back and verify values. */
+
+		for ( i = 0 ; i < Patterncount ; i++ ) {
+			usc_OutReg( info, TC0R, BitPatterns[i] );
+			usc_OutReg( info, TC1R, BitPatterns[(i+1)%Patterncount] );
+			usc_OutReg( info, TCLR, BitPatterns[(i+2)%Patterncount] );
+			usc_OutReg( info, RCLR, BitPatterns[(i+3)%Patterncount] );
+			usc_OutReg( info, RSR, BitPatterns[(i+4)%Patterncount] );
+			usc_OutDmaReg( info, TBCR, BitPatterns[(i+5)%Patterncount] );
+
+			if ( (usc_InReg( info, TC0R ) != BitPatterns[i]) ||
+				(usc_InReg( info, TC1R ) != BitPatterns[(i+1)%Patterncount]) ||
+				(usc_InReg( info, TCLR ) != BitPatterns[(i+2)%Patterncount]) ||
+				(usc_InReg( info, RCLR ) != BitPatterns[(i+3)%Patterncount]) ||
+				(usc_InReg( info, RSR ) != BitPatterns[(i+4)%Patterncount]) ||
+				(usc_InDmaReg( info, TBCR ) != BitPatterns[(i+5)%Patterncount]) ){
+				rc = FALSE;
+				break;
+			}
+		}
+	}
+
+	usc_reset(info);
+	spin_unlock_irqrestore(&info->irq_spinlock,flags);
+
+	return rc;
+
+}	/* end of mgsl_register_test() */
+
+/* mgsl_irq_test()	Perform interrupt test of the 16C32.
+ *
+ * Arguments:		info	pointer to device instance data
+ * Return Value:	TRUE if test passed, otherwise FALSE
+ */
+BOOLEAN mgsl_irq_test( struct mgsl_struct *info )
+{
+	unsigned long EndTime;
+	unsigned long flags;
+
+	spin_lock_irqsave(&info->irq_spinlock,flags);
+	usc_reset(info);
+
+	/*
+	 * Setup 16C32 to interrupt on TxC pin (14MHz clock) transition.
+	 * The ISR sets irq_occurred to 1.
+	 */
+
+	info->irq_occurred = FALSE;
+
+	/* Enable INTEN gate for ISA adapter (Port 6, Bit12) */
+	/* This connects the IRQ request signal to the ISA bus */
+	/* on the ISA adapter. This has no effect for the PCI adapter */
+	usc_OutReg( info, PCR, (unsigned short)((usc_InReg(info, PCR) | BIT13) & ~BIT12) );
+
+	usc_EnableMasterIrqBit(info);
+	usc_EnableInterrupts(info, IO_PIN);
+	usc_ClearIrqPendingBits(info, IO_PIN);
+
+	usc_UnlatchIostatusBits(info, MISCSTATUS_TXC_LATCHED);
+	usc_EnableStatusIrqs(info, SICR_TXC_ACTIVE + SICR_TXC_INACTIVE);
+
+	spin_unlock_irqrestore(&info->irq_spinlock,flags);
+
+	EndTime=100;
+	while( EndTime-- && !info->irq_occurred ) {
+		set_current_state(TASK_INTERRUPTIBLE);
+		schedule_timeout(jiffies_from_ms(10));
+	}
+
+	spin_lock_irqsave(&info->irq_spinlock,flags);
+	usc_reset(info);
+	spin_unlock_irqrestore(&info->irq_spinlock,flags);
+
+	if ( !info->irq_occurred )
+		return FALSE;
+	else
+		return TRUE;
+
+}	/* end of mgsl_irq_test() */
+
+/* mgsl_dma_test()
+ *
+ *	Perform a DMA test of the 16C32. A small frame is
+ *	transmitted via DMA from a transmit buffer to a receive buffer
+ *	using single buffer DMA mode.
+ *
+ * Arguments:		info	pointer to device instance data
+ * Return Value:	TRUE if test passed, otherwise FALSE
+ */
+BOOLEAN mgsl_dma_test( struct mgsl_struct *info )
+{
+	unsigned short FifoLevel;
+	unsigned long phys_addr;
+	unsigned int FrameSize;
+	unsigned int i;
+	char *TmpPtr;
+	BOOLEAN rc = TRUE;
+	unsigned short status=0;
+	unsigned long EndTime;
+	unsigned long flags;
+	MGSL_PARAMS tmp_params;
+
+	/* save current port options */
+	memcpy(&tmp_params,&info->params,sizeof(MGSL_PARAMS));
+	/* load default port options */
+	memcpy(&info->params,&default_params,sizeof(MGSL_PARAMS));
+
+#define TESTFRAMESIZE 40
+
+	spin_lock_irqsave(&info->irq_spinlock,flags);
+
+	/* setup 16C32 for SDLC DMA transfer mode */
+
+	usc_reset(info);
+	usc_set_sdlc_mode(info);
+	usc_enable_loopback(info,1);
+
+	/* Reprogram the RDMR so that the 16C32 does NOT clear the count
+	 * field of the buffer entry after fetching buffer address.
This + * way we can detect a DMA failure for a DMA read (which should be + * non-destructive to system memory) before we try and write to + * memory (where a failure could corrupt system memory). + */ + + /* Receive DMA mode Register (RDMR) + * + * <15..14> 11 DMA mode = Linked List Buffer mode + * <13> 1 RSBinA/L = store Rx status Block in List entry + * <12> 0 1 = Clear count of List Entry after fetching + * <11..10> 00 Address mode = Increment + * <9> 1 Terminate Buffer on RxBound + * <8> 0 Bus Width = 16bits + * <7..0> ? status Bits (write as 0s) + * + * 1110 0010 0000 0000 = 0xe200 + */ + + usc_OutDmaReg( info, RDMR, 0xe200 ); + + spin_unlock_irqrestore(&info->irq_spinlock,flags); + + + /* SETUP TRANSMIT AND RECEIVE DMA BUFFERS */ + + FrameSize = TESTFRAMESIZE; + + /* setup 1st transmit buffer entry: */ + /* with frame size and transmit control word */ + + info->tx_buffer_list[0].count = FrameSize; + info->tx_buffer_list[0].rcc = FrameSize; + info->tx_buffer_list[0].status = 0x4000; + + /* build a transmit frame in 1st transmit DMA buffer */ + + TmpPtr = info->tx_buffer_list[0].virt_addr; + for (i = 0; i < FrameSize; i++ ) + *TmpPtr++ = i; + + /* setup 1st receive buffer entry: */ + /* clear status, set max receive buffer size */ + + info->rx_buffer_list[0].status = 0; + info->rx_buffer_list[0].count = FrameSize + 4; + + /* zero out the 1st receive buffer */ + + memset( info->rx_buffer_list[0].virt_addr, 0, FrameSize + 4 ); + + /* Set count field of next buffer entries to prevent */ + /* 16C32 from using buffers after the 1st one. */ + + info->tx_buffer_list[1].count = 0; + info->rx_buffer_list[1].count = 0; + + + /***************************/ + /* Program 16C32 receiver. */ + /***************************/ + + spin_lock_irqsave(&info->irq_spinlock,flags); + + /* setup DMA transfers */ + usc_RTCmd( info, RTCmd_PurgeRxFifo ); + + /* program 16C32 receiver with physical address of 1st DMA buffer entry */ + phys_addr = info->rx_buffer_list[0].phys_entry; + usc_OutDmaReg( info, NRARL, (unsigned short)phys_addr ); + usc_OutDmaReg( info, NRARU, (unsigned short)(phys_addr >> 16) ); + + /* Clear the Rx DMA status bits (read RDMR) and start channel */ + usc_InDmaReg( info, RDMR ); + usc_DmaCmd( info, DmaCmd_InitRxChannel ); + + /* Enable Receiver (RMR <1..0> = 10) */ + usc_OutReg( info, RMR, (unsigned short)((usc_InReg(info, RMR) & 0xfffc) | 0x0002) ); + + spin_unlock_irqrestore(&info->irq_spinlock,flags); + + + /*************************************************************/ + /* WAIT FOR RECEIVER TO DMA ALL PARAMETERS FROM BUFFER ENTRY */ + /*************************************************************/ + + /* Wait 100ms for interrupt. */ + EndTime = jiffies + jiffies_from_ms(100); + + for(;;) { + if ( jiffies > EndTime ) { + rc = FALSE; + break; + } + + spin_lock_irqsave(&info->irq_spinlock,flags); + status = usc_InDmaReg( info, RDMR ); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + + if ( !(status & BIT4) && (status & BIT5) ) { + /* INITG (BIT 4) is inactive (no entry read in progress) AND */ + /* BUSY (BIT 5) is active (channel still active). */ + /* This means the buffer entry read has completed. */ + break; + } + } + + + /******************************/ + /* Program 16C32 transmitter. 
*/ + /******************************/ + + spin_lock_irqsave(&info->irq_spinlock,flags); + + /* Program the Transmit Character Length Register (TCLR) */ + /* and clear FIFO (TCC is loaded with TCLR on FIFO clear) */ + + usc_OutReg( info, TCLR, (unsigned short)info->tx_buffer_list[0].count ); + usc_RTCmd( info, RTCmd_PurgeTxFifo ); + + /* Program the address of the 1st DMA Buffer Entry in linked list */ + + phys_addr = info->tx_buffer_list[0].phys_entry; + usc_OutDmaReg( info, NTARL, (unsigned short)phys_addr ); + usc_OutDmaReg( info, NTARU, (unsigned short)(phys_addr >> 16) ); + + /* unlatch Tx status bits, and start transmit channel. */ + + usc_OutReg( info, TCSR, (unsigned short)(( usc_InReg(info, TCSR) & 0x0f00) | 0xfa) ); + usc_DmaCmd( info, DmaCmd_InitTxChannel ); + + /* wait for DMA controller to fill transmit FIFO */ + + usc_TCmd( info, TCmd_SelectTicrTxFifostatus ); + + spin_unlock_irqrestore(&info->irq_spinlock,flags); + + + /**********************************/ + /* WAIT FOR TRANSMIT FIFO TO FILL */ + /**********************************/ + + /* Wait 100ms */ + EndTime = jiffies + jiffies_from_ms(100); + + for(;;) { + if ( jiffies > EndTime ) { + rc = FALSE; + break; + } + + spin_lock_irqsave(&info->irq_spinlock,flags); + FifoLevel = usc_InReg(info, TICR) >> 8; + spin_unlock_irqrestore(&info->irq_spinlock,flags); + + if ( FifoLevel < 16 ) + break; + else + if ( FrameSize < 32 ) { + /* This frame is smaller than the entire transmit FIFO */ + /* so wait for the entire frame to be loaded. */ + if ( FifoLevel <= (32 - FrameSize) ) + break; + } + } + + + if ( rc == TRUE ) + { + /* Enable 16C32 transmitter. */ + + spin_lock_irqsave(&info->irq_spinlock,flags); + + /* Transmit mode Register (TMR), <1..0> = 10, Enable Transmitter */ + usc_TCmd( info, TCmd_SendFrame ); + usc_OutReg( info, TMR, (unsigned short)((usc_InReg(info, TMR) & 0xfffc) | 0x0002) ); + + spin_unlock_irqrestore(&info->irq_spinlock,flags); + + + /******************************/ + /* WAIT FOR TRANSMIT COMPLETE */ + /******************************/ + + /* Wait 100ms */ + EndTime = jiffies + jiffies_from_ms(100); + + /* While timer not expired wait for transmit complete */ + + spin_lock_irqsave(&info->irq_spinlock,flags); + status = usc_InReg( info, TCSR ); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + + while ( !(status & (BIT6+BIT5+BIT4+BIT2+BIT1)) ) { + if ( jiffies > EndTime ) { + rc = FALSE; + break; + } + + spin_lock_irqsave(&info->irq_spinlock,flags); + status = usc_InReg( info, TCSR ); + spin_unlock_irqrestore(&info->irq_spinlock,flags); + } + } + + + if ( rc == TRUE ){ + /* CHECK FOR TRANSMIT ERRORS */ + if ( status & (BIT5 + BIT1) ) + rc = FALSE; + } + + if ( rc == TRUE ) { + /* WAIT FOR RECEIVE COMPLETE */ + + /* Wait 100ms */ + EndTime = jiffies + jiffies_from_ms(100); + + /* Wait for 16C32 to write receive status to buffer entry. 
+	 */
+		status=info->rx_buffer_list[0].status;
+		while ( status == 0 ) {
+			if ( jiffies > EndTime ) {
+				printk(KERN_ERR"mark 4\n");
+				rc = FALSE;
+				break;
+			}
+			status=info->rx_buffer_list[0].status;
+		}
+	}
+
+
+	if ( rc == TRUE ) {
+		/* CHECK FOR RECEIVE ERRORS */
+		status = info->rx_buffer_list[0].status;
+
+		if ( status & (BIT8 + BIT3 + BIT1) ) {
+			/* receive error has occurred */
+			rc = FALSE;
+		} else {
+			if ( memcmp( info->tx_buffer_list[0].virt_addr ,
+				info->rx_buffer_list[0].virt_addr, FrameSize ) ){
+				rc = FALSE;
+			}
+		}
+	}
+
+	spin_lock_irqsave(&info->irq_spinlock,flags);
+	usc_reset( info );
+	spin_unlock_irqrestore(&info->irq_spinlock,flags);
+
+	/* restore current port options */
+	memcpy(&info->params,&tmp_params,sizeof(MGSL_PARAMS));
+
+	return rc;
+
+}	/* end of mgsl_dma_test() */
+
+/* mgsl_adapter_test()
+ *
+ *	Perform the register, IRQ, and DMA tests for the 16C32.
+ *
+ * Arguments:		info	pointer to device instance data
+ * Return Value:	0 if success, otherwise -ENODEV
+ */
+int mgsl_adapter_test( struct mgsl_struct *info )
+{
+	if ( debug_level >= DEBUG_LEVEL_INFO )
+		printk( "%s(%d):Testing device %s\n",
+			__FILE__,__LINE__,info->device_name );
+
+	if ( !mgsl_register_test( info ) ) {
+		info->init_error = DiagStatus_AddressFailure;
+		printk( "%s(%d):Register test failure for device %s Addr=%04X\n",
+			__FILE__,__LINE__,info->device_name, (unsigned short)(info->io_base) );
+		return -ENODEV;
+	}
+
+	if ( !mgsl_irq_test( info ) ) {
+		info->init_error = DiagStatus_IrqFailure;
+		printk( "%s(%d):Interrupt test failure for device %s IRQ=%d\n",
+			__FILE__,__LINE__,info->device_name, (unsigned short)(info->irq_level) );
+		return -ENODEV;
+	}
+
+	if ( !mgsl_dma_test( info ) ) {
+		info->init_error = DiagStatus_DmaFailure;
+		printk( "%s(%d):DMA test failure for device %s DMA=%d\n",
+			__FILE__,__LINE__,info->device_name, (unsigned short)(info->dma_level) );
+		return -ENODEV;
+	}
+
+	if ( debug_level >= DEBUG_LEVEL_INFO )
+		printk( "%s(%d):device %s passed diagnostics\n",
+			__FILE__,__LINE__,info->device_name );
+
+	return 0;
+
+}	/* end of mgsl_adapter_test() */
+
+/* mgsl_memory_test()
+ *
+ *	Test the shared memory on a PCI adapter.
+ *
+ * Arguments:		info	pointer to device instance data
+ * Return Value:	TRUE if test passed, otherwise FALSE
+ */
+BOOLEAN mgsl_memory_test( struct mgsl_struct *info )
+{
+	static unsigned long BitPatterns[] = { 0x0, 0x55555555, 0xaaaaaaaa,
+		0x66666666, 0x99999999, 0xffffffff, 0x12345678 };
+	unsigned long Patterncount = sizeof(BitPatterns)/sizeof(unsigned long);
+	unsigned long i;
+	unsigned long TestLimit = SHARED_MEM_ADDRESS_SIZE/sizeof(unsigned long);
+	unsigned long * TestAddr;
+
+	if ( info->bus_type != MGSL_BUS_TYPE_PCI )
+		return TRUE;
+
+	TestAddr = (unsigned long *)info->memory_base;
+
+	/* Test data lines with test pattern at one location. */
+
+	for ( i = 0 ; i < Patterncount ; i++ ) {
+		*TestAddr = BitPatterns[i];
+		if ( *TestAddr != BitPatterns[i] )
+			return FALSE;
+	}
+
+	/* Test address lines with incrementing pattern over */
+	/* entire address range. */
+
+	for ( i = 0 ; i < TestLimit ; i++ ) {
+		*TestAddr = i * 4;
+		TestAddr++;
+	}
+
+	TestAddr = (unsigned long *)info->memory_base;
+
+	for ( i = 0 ; i < TestLimit ; i++ ) {
+		if ( *TestAddr != i * 4 )
+			return FALSE;
+		TestAddr++;
+	}
+
+	memset( info->memory_base, 0, SHARED_MEM_ADDRESS_SIZE );
+
+	return TRUE;
+
+}	/* End Of mgsl_memory_test() */
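
mgsl_memory_test() above is the classic two-phase RAM check: a handful of patterns written to a single word exercises the data lines, then an incrementing sweep gives every location a unique value so that faulty address lines show up on readback. The same idea as a small userspace program (MEM_WORDS is an arbitrary demo size):

	#include <stdio.h>
	#include <stdlib.h>

	#define MEM_WORDS 1024		/* arbitrary size for the demo */

	int main(void)
	{
		static const unsigned long patterns[] = { 0x0, 0x55555555,
			0xaaaaaaaa, 0x66666666, 0x99999999, 0xffffffff, 0x12345678 };
		unsigned long *mem = malloc(MEM_WORDS * sizeof(*mem));
		unsigned long i;

		if (!mem)
			return 1;

		for (i = 0; i < sizeof(patterns)/sizeof(patterns[0]); i++) {
			mem[0] = patterns[i];		/* data line test */
			if (mem[0] != patterns[i])
				return 1;
		}

		for (i = 0; i < MEM_WORDS; i++)		/* address line test */
			mem[i] = i * 4;
		for (i = 0; i < MEM_WORDS; i++)
			if (mem[i] != i * 4)
				return 1;

		free(mem);
		printf("memory test passed\n");
		return 0;
	}
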
+
+
+/* mgsl_load_pci_memory()
+ *
+ *	Load a large block of data into the PCI shared memory.
+ *	Use this instead of memcpy() or memmove() to move data
+ *	into the PCI shared memory.
+ *
+ * Notes:
+ *
+ *	This function prevents the PCI9050 interface chip from hogging
+ *	the adapter local bus, which can starve the 16C32 by preventing
+ *	16C32 bus master cycles.
+ *
+ *	The PCI9050 documentation says that the 9050 will always release
+ *	control of the local bus after completing the current read
+ *	or write operation.
+ *
+ *	It appears that as long as the PCI9050 write FIFO is full, the
+ *	PCI9050 treats all of the writes as a single burst transaction
+ *	and will not release the bus. This causes DMA latency problems
+ *	at high speeds when copying large data blocks to the shared
+ *	memory.
+ *
+ *	This function, in effect, breaks a large shared memory write
+ *	into multiple transactions by interleaving a shared memory read
+ *	which will flush the write FIFO and 'complete' the write
+ *	transaction. This allows any pending DMA request to gain control
+ *	of the local bus in a timely fashion.
+ *
+ * Arguments:
+ *
+ *	TargetPtr	pointer to target address in PCI shared memory
+ *	SourcePtr	pointer to source buffer for data
+ *	count		count in bytes of data to copy
+ *
+ * Return Value:	None
+ */
+void mgsl_load_pci_memory( char* TargetPtr, const char* SourcePtr,
+	unsigned short count )
+{
+	/* 16 32-bit writes @ 60ns each = 960ns max latency on local bus */
+#define PCI_LOAD_INTERVAL 64
+
+	unsigned short Intervalcount = count / PCI_LOAD_INTERVAL;
+	unsigned short Index;
+	unsigned long Dummy;
+
+	for ( Index = 0 ; Index < Intervalcount ; Index++ )
+	{
+		memcpy(TargetPtr, SourcePtr, PCI_LOAD_INTERVAL);
+		Dummy = *((volatile unsigned long *)TargetPtr);
+		TargetPtr += PCI_LOAD_INTERVAL;
+		SourcePtr += PCI_LOAD_INTERVAL;
+	}
+
+	memcpy( TargetPtr, SourcePtr, count % PCI_LOAD_INTERVAL );
+
+}	/* End Of mgsl_load_pci_memory() */
+
+void mgsl_trace_block(struct mgsl_struct *info,const char* data, int count, int xmit)
+{
+	int i;
+	int linecount;
+	if (xmit)
+		printk("%s tx data:\n",info->device_name);
+	else
+		printk("%s rx data:\n",info->device_name);
+
+	while(count) {
+		if (count > 16)
+			linecount = 16;
+		else
+			linecount = count;
+
+		for(i=0;i<linecount;i++)
+			printk("%02X ",(unsigned char)data[i]);
+		for(;i<17;i++)
+			printk("   ");
+		for(i=0;i<linecount;i++) {
+			if (data[i]>=040 && data[i]<=0176)
+				printk("%c",data[i]);
+			else
+				printk(".");
+		}
+		printk("\n");
+
+		data += linecount;
+		count -= linecount;
+	}
+}	/* end of mgsl_trace_block() */
+
+/* mgsl_tx_timeout()
+ *
+ *	called when HDLC frame times out;
+ *	update stats and do tx completion processing
+ *
+ * Arguments:	context		pointer to device instance data
+ * Return Value:	None
+ */
+void mgsl_tx_timeout(unsigned long context)
+{
+	struct mgsl_struct *info = (struct mgsl_struct*)context;
+	unsigned long flags;
+
+	if ( debug_level >= DEBUG_LEVEL_INFO )
+		printk( "%s(%d):mgsl_tx_timeout(%s)\n",
+			__FILE__,__LINE__,info->device_name);
+	if(info->tx_active &&
+		(info->params.mode == MGSL_MODE_HDLC ||
+		 info->params.mode == MGSL_MODE_RAW) ) {
+		info->icount.txtimeout++;
+	}
+	spin_lock_irqsave(&info->irq_spinlock,flags);
+	info->tx_active = 0;
+	info->xmit_cnt = info->xmit_head = info->xmit_tail = 0;
+
+	if ( info->params.flags & HDLC_FLAG_HDLC_LOOPMODE )
+		usc_loopmode_cancel_transmit( info );
+
+	spin_unlock_irqrestore(&info->irq_spinlock,flags);
+
+#ifdef CONFIG_SYNCLINK_SYNCPPP
+	if (info->netcount)
+		mgsl_sppp_tx_done(info);
+	else
+#endif
+		mgsl_bh_transmit(info);
+
+}	/* end of mgsl_tx_timeout() */
+
+/* signal that there are no more frames to send, so that
+ * line is 'released' by echoing RxD to TxD when current
+ * transmission is complete (or
immediately if no tx in progress). + */ +static int mgsl_loopmode_send_done( struct mgsl_struct * info ) +{ + unsigned long flags; + + spin_lock_irqsave(&info->irq_spinlock,flags); + if (info->params.flags & HDLC_FLAG_HDLC_LOOPMODE) { + if (info->tx_active) + info->loopmode_send_done_requested = TRUE; + else + usc_loopmode_send_done(info); + } + spin_unlock_irqrestore(&info->irq_spinlock,flags); + + return 0; +} + +/* release the line by echoing RxD to TxD + * upon completion of a transmit frame + */ +void usc_loopmode_send_done( struct mgsl_struct * info ) +{ + info->loopmode_send_done_requested = FALSE; + /* clear CMR:13 to 0 to start echoing RxData to TxData */ + info->cmr_value &= ~BIT13; + usc_OutReg(info, CMR, info->cmr_value); +} + +/* abort a transmit in progress while in HDLC LoopMode + */ +void usc_loopmode_cancel_transmit( struct mgsl_struct * info ) +{ + /* reset tx dma channel and purge TxFifo */ + usc_RTCmd( info, RTCmd_PurgeTxFifo ); + usc_DmaCmd( info, DmaCmd_ResetTxChannel ); + usc_loopmode_send_done( info ); +} + +/* for HDLC/SDLC LoopMode, setting CMR:13 after the transmitter is enabled + * is an Insert Into Loop action. Upon receipt of a GoAhead sequence (RxAbort) + * we must clear CMR:13 to begin repeating TxData to RxData + */ +void usc_loopmode_insert_request( struct mgsl_struct * info ) +{ + info->loopmode_insert_requested = TRUE; + + /* enable RxAbort irq. On next RxAbort, clear CMR:13 to + * begin repeating TxData on RxData (complete insertion) + */ + usc_OutReg( info, RICR, + (usc_InReg( info, RICR ) | RXSTATUS_ABORT_RECEIVED ) ); + + /* set CMR:13 to insert into loop on next GoAhead (RxAbort) */ + info->cmr_value |= BIT13; + usc_OutReg(info, CMR, info->cmr_value); +} + +/* return 1 if station is inserted into the loop, otherwise 0 + */ +int usc_loopmode_active( struct mgsl_struct * info) +{ + return usc_InReg( info, CCSR ) & BIT7 ? 1 : 0 ; +} + +/* return 1 if USC is in loop send mode, otherwise 0 + */ +int usc_loopmode_send_active( struct mgsl_struct * info ) +{ + return usc_InReg( info, CCSR ) & BIT6 ? 
1 : 0 ;
+}
+
+#ifdef CONFIG_SYNCLINK_SYNCPPP
+/* syncppp net device routines
+ */
+
+void mgsl_sppp_init(struct mgsl_struct *info)
+{
+	struct net_device *d;
+
+	sprintf(info->netname,"mgsl%d",info->line);
+
+	info->if_ptr = &info->pppdev;
+	info->netdev = info->pppdev.dev = &info->netdevice;
+
+	sppp_attach(&info->pppdev);
+
+	d = info->netdev;
+	strcpy(d->name,info->netname);
+	d->base_addr = info->io_base;
+	d->irq = info->irq_level;
+	d->dma = info->dma_level;
+	d->priv = info;
+	d->init = NULL;
+	d->open = mgsl_sppp_open;
+	d->stop = mgsl_sppp_close;
+	d->hard_start_xmit = mgsl_sppp_tx;
+	d->do_ioctl = mgsl_sppp_ioctl;
+	d->get_stats = mgsl_net_stats;
+	d->tx_timeout = mgsl_sppp_tx_timeout;
+	d->watchdog_timeo = 10*HZ;
+
+#if LINUX_VERSION_CODE < VERSION(2,4,4)
+	dev_init_buffers(d);
+#endif
+
+	if (register_netdev(d) == -1) {
+		printk(KERN_WARNING "%s: register_netdev failed.\n", d->name);
+		sppp_detach(info->netdev);
+		return;
+	}
+
+	if (debug_level >= DEBUG_LEVEL_INFO)
+		printk("mgsl_sppp_init()\n");
+}
+
+void mgsl_sppp_delete(struct mgsl_struct *info)
+{
+	if (debug_level >= DEBUG_LEVEL_INFO)
+		printk("mgsl_sppp_delete(%s)\n",info->netname);
+	sppp_detach(info->netdev);
+	unregister_netdev(info->netdev);
+}
+
+int mgsl_sppp_open(struct net_device *d)
+{
+	struct mgsl_struct *info = d->priv;
+	int err;
+	unsigned long flags;
+
+	if (debug_level >= DEBUG_LEVEL_INFO)
+		printk("mgsl_sppp_open(%s)\n",info->netname);
+
+	spin_lock_irqsave(&info->netlock, flags);
+	if (info->count != 0 || info->netcount != 0) {
+		printk(KERN_WARNING "%s: sppp_open returning busy\n", info->netname);
+		spin_unlock_irqrestore(&info->netlock, flags);
+		return -EBUSY;
+	}
+	info->netcount=1;
+	MOD_INC_USE_COUNT;
+	spin_unlock_irqrestore(&info->netlock, flags);
+
+	/* claim resources and init adapter */
+	if ((err = startup(info)) != 0)
+		goto open_fail;
+
+	/* allow syncppp module to do open processing */
+	if ((err = sppp_open(d)) != 0) {
+		shutdown(info);
+		goto open_fail;
+	}
+
+	info->serial_signals |= SerialSignal_RTS + SerialSignal_DTR;
+	mgsl_program_hw(info);
+
+	d->trans_start = jiffies;
+	netif_start_queue(d);
+	return 0;
+
+open_fail:
+	spin_lock_irqsave(&info->netlock, flags);
+	info->netcount=0;
+	MOD_DEC_USE_COUNT;
+	spin_unlock_irqrestore(&info->netlock, flags);
+	return err;
+}
+
+void mgsl_sppp_tx_timeout(struct net_device *dev)
+{
+	struct mgsl_struct *info = dev->priv;
+	unsigned long flags;
+
+	if (debug_level >= DEBUG_LEVEL_INFO)
+		printk("mgsl_sppp_tx_timeout(%s)\n",info->netname);
+
+	info->netstats.tx_errors++;
+	info->netstats.tx_aborted_errors++;
+
+	spin_lock_irqsave(&info->irq_spinlock,flags);
+	usc_stop_transmitter(info);
+	spin_unlock_irqrestore(&info->irq_spinlock,flags);
+
+	netif_wake_queue(dev);
+}
+
+int mgsl_sppp_tx(struct sk_buff *skb, struct net_device *dev)
+{
+	struct mgsl_struct *info = dev->priv;
+	unsigned long flags;
+
+	if (debug_level >= DEBUG_LEVEL_INFO)
+		printk("mgsl_sppp_tx(%s)\n",info->netname);
+
+	netif_stop_queue(dev);
+
+	info->xmit_cnt = skb->len;
+	mgsl_load_tx_dma_buffer(info, skb->data, skb->len);
+	info->netstats.tx_packets++;
+	info->netstats.tx_bytes += skb->len;
+	dev_kfree_skb(skb);
+
+	dev->trans_start = jiffies;
+
+	spin_lock_irqsave(&info->irq_spinlock,flags);
+	if (!info->tx_active)
+		usc_start_transmitter(info);
+	spin_unlock_irqrestore(&info->irq_spinlock,flags);
+
+	return 0;
+}
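
mgsl_sppp_open() above only claims the device for the network interface if neither the tty side (info->count) nor the net side (info->netcount) already holds it, doing the test and the claim under netlock so two openers cannot race. A userspace sketch of that claim protocol, with hypothetical names and a pthread mutex standing in for the spinlock:

	#include <pthread.h>

	struct dev_state {
		pthread_mutex_t lock;
		int count;	/* opens via the tty interface */
		int netcount;	/* opens via the network interface */
	};

	/* Returns 0 on success, -1 (think -EBUSY) if already claimed. */
	static int net_open(struct dev_state *d)
	{
		pthread_mutex_lock(&d->lock);
		if (d->count != 0 || d->netcount != 0) {
			pthread_mutex_unlock(&d->lock);
			return -1;
		}
		d->netcount = 1;	/* claim before dropping the lock */
		pthread_mutex_unlock(&d->lock);
		return 0;
	}

	static void net_close(struct dev_state *d)
	{
		pthread_mutex_lock(&d->lock);
		d->netcount = 0;
		pthread_mutex_unlock(&d->lock);
	}
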
printk("mgsl_sppp_close(%s)\n",info->netname); + + /* shutdown adapter and release resources */ + shutdown(info); + + /* allow syncppp to do close processing */ + sppp_close(d); + netif_stop_queue(d); + + spin_lock_irqsave(&info->netlock, flags); + info->netcount=0; + MOD_DEC_USE_COUNT; + spin_unlock_irqrestore(&info->netlock, flags); + return 0; +} + +void mgsl_sppp_rx_done(struct mgsl_struct *info, char *buf, int size) +{ + struct sk_buff *skb = dev_alloc_skb(size); + if (debug_level >= DEBUG_LEVEL_INFO) + printk("mgsl_sppp_rx_done(%s)\n",info->netname); + if (skb == NULL) { + printk(KERN_NOTICE "%s: cant alloc skb, dropping packet\n", + info->netname); + info->netstats.rx_dropped++; + return; + } + + memcpy(skb_put(skb, size),buf,size); + + skb->protocol = htons(ETH_P_WAN_PPP); + skb->dev = info->netdev; + skb->mac.raw = skb->data; + info->netstats.rx_packets++; + info->netstats.rx_bytes += size; + netif_rx(skb); + info->netdev->trans_start = jiffies; +} + +void mgsl_sppp_tx_done(struct mgsl_struct *info) +{ + if (netif_queue_stopped(info->netdev)) + netif_wake_queue(info->netdev); +} + +struct net_device_stats *mgsl_net_stats(struct net_device *dev) +{ + struct mgsl_struct *info = dev->priv; + if (debug_level >= DEBUG_LEVEL_INFO) + printk("mgsl_net_stats(%s)\n",info->netname); + return &info->netstats; +} + +int mgsl_sppp_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) +{ + struct mgsl_struct *info = (struct mgsl_struct *)dev->priv; + if (debug_level >= DEBUG_LEVEL_INFO) + printk("%s(%d):mgsl_ioctl %s cmd=%08X\n", __FILE__,__LINE__, + info->netname, cmd ); + return sppp_do_ioctl(dev, ifr, cmd); +} + +#endif /* ifdef CONFIG_SYNCLINK_SYNCPPP */ + +static int __init synclink_init_one (struct pci_dev *dev, + const struct pci_device_id *ent) +{ + struct mgsl_struct *info; + + if (pci_enable_device(dev)) { + printk("error enabling pci device %p\n", dev); + return -EIO; + } + + if (!(info = mgsl_allocate_device())) { + printk("can't allocate device instance data.\n"); + return -EIO; + } + + /* Copy user configuration info to device instance data */ + + info->io_base = pci_resource_start(dev, 2); + info->irq_level = dev->irq; + info->phys_memory_base = pci_resource_start(dev, 3); + + /* Because veremap only works on page boundaries we must map + * a larger area than is actually implemented for the LCR + * memory range. We map a full page starting at the page boundary. + */ + info->phys_lcr_base = pci_resource_start(dev, 0); + info->lcr_offset = info->phys_lcr_base & (PAGE_SIZE-1); + info->phys_lcr_base &= ~(PAGE_SIZE-1); + + info->bus_type = MGSL_BUS_TYPE_PCI; + info->io_addr_size = 8; + info->irq_flags = SA_SHIRQ; + + /* Store the PCI9050 misc control register value because a flaw + * in the PCI9050 prevents LCR registers from being read if + * BIOS assigns an LCR base address with bit 7 set. + * + * Only the misc control register is accessed for which only + * write access is needed, so set an initial value and change + * bits to the device instance data as we write the value + * to the actual misc control register. 
+ */ + info->misc_ctrl_value = 0x087e4546; + + mgsl_add_device(info); + + return 0; +} + +static void __exit synclink_remove_one (struct pci_dev *dev) +{ +} + diff -urpN linux-2.4.9-linus/drivers/char/vc_screen.c linux-2.4.9-larpage/drivers/char/vc_screen.c --- linux-2.4.9-linus/drivers/char/vc_screen.c 2000-10-16 12:58:51.000000000 -0700 +++ linux-2.4.9-larpage/drivers/char/vc_screen.c 2002-11-20 02:02:41.000000000 -0800 @@ -89,8 +89,8 @@ static loff_t vcs_lseek(struct file *fil * so that we can easily avoid touching user space while holding the * console spinlock. */ -extern char con_buf[PAGE_SIZE]; -#define CON_BUF_SIZE PAGE_SIZE +#define CON_BUF_SIZE MMUPAGE_SIZE +extern char con_buf[CON_BUF_SIZE]; extern struct semaphore con_buf_sem; static ssize_t diff -urpN linux-2.4.9-linus/drivers/ieee1394/pcilynx.h linux-2.4.9-larpage/drivers/ieee1394/pcilynx.h --- linux-2.4.9-linus/drivers/ieee1394/pcilynx.h 2001-08-12 12:39:02.000000000 -0700 +++ linux-2.4.9-larpage/drivers/ieee1394/pcilynx.h 2002-11-20 02:02:41.000000000 -0800 @@ -17,7 +17,7 @@ #define NUM_ISORCV_PCL 4 #define MAX_ISORCV_SIZE 2048 #define ISORCV_PER_PAGE (PAGE_SIZE / MAX_ISORCV_SIZE) -#define ISORCV_PAGES (NUM_ISORCV_PCL / ISORCV_PER_PAGE) +#define ISORCV_PAGES ((NUM_ISORCV_PCL + ISORCV_PER_PAGE - 1) / ISORCV_PER_PAGE) #define CHANNEL_LOCALBUS 0 #define CHANNEL_ASYNC_RCV 1 diff -urpN linux-2.4.9-linus/drivers/ieee1394/video1394.c linux-2.4.9-larpage/drivers/ieee1394/video1394.c --- linux-2.4.9-linus/drivers/ieee1394/video1394.c 2001-08-16 09:49:49.000000000 -0700 +++ linux-2.4.9-larpage/drivers/ieee1394/video1394.c 2002-11-20 02:02:43.000000000 -0800 @@ -152,127 +152,68 @@ static struct hpsb_highlevel *hl_handle static struct video_template video_tmpl = { irq_handler }; -/* Code taken from bttv.c */ - -/*******************************/ -/* Memory management functions */ -/*******************************/ - -#define MDEBUG(x) do { } while(0) /* Debug memory management */ - -/* [DaveM] I've recoded most of this so that: - * 1) It's easier to tell what is happening - * 2) It's more portable, especially for translating things - * out of vmalloc mapped areas in the kernel. - * 3) Less unnecessary translations happen. - * - * The code used to assume that the kernel vmalloc mappings - * existed in the page tables of every process, this is simply - * not guaranteed. We now use pgd_offset_k which is the - * defined way to get at the kernel page tables. - */ - -/* Given PGD from the address space's page table, return the kernel - * virtual mapping of the physical memory mapped at ADR. 
- */ -static inline unsigned long uvirt_to_kva(pgd_t *pgd, unsigned long adr) -{ - unsigned long ret = 0UL; - pmd_t *pmd; - pte_t *ptep, pte; - - if (!pgd_none(*pgd)) { - pmd = pmd_offset(pgd, adr); - if (!pmd_none(*pmd)) { - ptep = pte_offset(pmd, adr); - pte = *ptep; - if(pte_present(pte)) { - ret = (unsigned long) - page_address(pte_page(pte)); - ret |= (adr & (PAGE_SIZE - 1)); - } - } - } - MDEBUG(printk("uv2kva(%lx-->%lx)", adr, ret)); - return ret; -} - -static inline unsigned long uvirt_to_bus(unsigned long adr) -{ - unsigned long kva, ret; - - kva = uvirt_to_kva(pgd_offset(current->mm, adr), adr); - ret = virt_to_bus((void *)kva); - MDEBUG(printk("uv2b(%lx-->%lx)", adr, ret)); - return ret; -} - -static inline unsigned long kvirt_to_bus(unsigned long adr) -{ - unsigned long va, kva, ret; - - va = VMALLOC_VMADDR(adr); - kva = uvirt_to_kva(pgd_offset_k(va), va); - ret = virt_to_bus((void *)kva); - MDEBUG(printk("kv2b(%lx-->%lx)", adr, ret)); - return ret; +/**********************************************************/ +/* Memory management functions, copied from bttv-driver.c */ +/**********************************************************/ + +static void *rvmalloc(unsigned long size) +{ + void *mem; + + mem = vmalloc_32(size); + if (mem) { + /* no junk to the user */ + memset(mem, 0, PAGE_ALIGN(size)); + /* no need to reserve until rvmap_page_range */ + } + return mem; } -/* Here we want the physical address of the memory. - * This is used when initializing the contents of the - * area and marking the pages as reserved. - */ -static inline unsigned long kvirt_to_pa(unsigned long adr) +static void rvfree(void *mem, unsigned long size) { - unsigned long va, kva, ret; + unsigned long vadr; - va = VMALLOC_VMADDR(adr); - kva = uvirt_to_kva(pgd_offset_k(va), va); - ret = __pa(kva); - MDEBUG(printk("kv2pa(%lx-->%lx)", adr, ret)); - return ret; + if (mem) { + vadr = (unsigned long) mem; + while ((long) size > 0) { + ClearPageReserved(vvirt_to_page(vadr)); + vadr += PAGE_SIZE; + size -= PAGE_SIZE; + } + vfree(mem); + } } -static void * rvmalloc(unsigned long size) +static inline int rvmap_page_range(const char *uadr, void *mem, + unsigned long size, pgprot_t prot) { - void * mem; - unsigned long adr, page; - - mem=vmalloc_32(size); - if (mem) - { - memset(mem, 0, size); /* Clear the ram out, - no junk to the user */ - adr=(unsigned long) mem; - while (size > 0) - { - page = kvirt_to_pa(adr); - mem_map_reserve(virt_to_page(__va(page))); - adr+=PAGE_SIZE; - size-=PAGE_SIZE; - } + struct page *page; + unsigned long padr; + unsigned long unit = PAGE_SIZE; + + while ((long) size > 0) { + if (unit > size) + unit = size; + page = vvirt_to_page((unsigned long)mem); + SetPageReserved(page); + padr = __pa(page_address(page)); + if (remap_page_range((unsigned long)uadr, padr, unit, prot)) + return -EAGAIN; + uadr += PAGE_SIZE; + mem += PAGE_SIZE; + size -= PAGE_SIZE; } - return mem; + return 0; } -static void rvfree(void * mem, unsigned long size) +static inline unsigned long kvirt_to_bus(unsigned long vadr) { - unsigned long adr, page; - - if (mem) - { - adr=(unsigned long) mem; - while (size > 0) - { - page = kvirt_to_pa(adr); - mem_map_unreserve(virt_to_page(__va(page))); - adr+=PAGE_SIZE; - size-=PAGE_SIZE; - } - vfree(mem); - } + unsigned long kadr; + + kadr = (unsigned long) page_address(vvirt_to_page(vadr)) + + (vadr & ~PAGE_MASK); + return virt_to_bus((void *) kadr); } -/* End of code taken from bttv.c */ static int free_dma_iso_ctx(struct dma_iso_ctx **d) { @@ -336,10 +277,7 @@ 
alloc_dma_iso_ctx(struct ti_ohci *ohci,
 	d->channel = channel;
 	d->num_desc = num_desc;
 	d->frame_size = buf_size;
-	if (buf_size%PAGE_SIZE)
-		d->buf_size = buf_size + PAGE_SIZE - (buf_size%PAGE_SIZE);
-	else
-		d->buf_size = buf_size;
+	d->buf_size = MMUPAGE_ALIGN(buf_size);
 	d->last_buffer = -1;
 	d->buf = NULL;
 	d->ir_prg = NULL;
@@ -371,9 +309,9 @@ alloc_dma_iso_ctx(struct ti_ohci *ohci,
 	}
 	memset(d->ir_prg, 0, d->num_desc * sizeof(struct dma_cmd *));
 
-	d->nb_cmd = d->buf_size / PAGE_SIZE + 1;
-	d->left_size = (d->frame_size % PAGE_SIZE) ?
-		d->frame_size % PAGE_SIZE : PAGE_SIZE;
+	d->nb_cmd = d->buf_size / MMUPAGE_SIZE + 1;
+	d->left_size = (d->frame_size % MMUPAGE_SIZE) ?
+		d->frame_size % MMUPAGE_SIZE : MMUPAGE_SIZE;
 
 	for (i=0;i<d->num_desc;i++) {
 		d->ir_prg[i] = kmalloc(d->nb_cmd *
@@ -405,11 +343,11 @@ alloc_dma_iso_ctx(struct ti_ohci *ohci,
 
 	d->packet_size = packet_size;
 
-	if (PAGE_SIZE % packet_size || packet_size>4096) {
+	if (MMUPAGE_SIZE % packet_size || packet_size>4096) {
 		PRINT(KERN_ERR, ohci->id,
 		      "Packet size %d (page_size: %ld) "
 		      "not yet supported\n",
-		      packet_size, PAGE_SIZE);
+		      packet_size, MMUPAGE_SIZE);
 		free_dma_iso_ctx(&d);
 		return NULL;
 	}
@@ -475,9 +413,9 @@ static void reset_ir_status(struct dma_i
 {
 	int i;
 	d->ir_prg[n][0].status = 4;
-	d->ir_prg[n][1].status = PAGE_SIZE-4;
+	d->ir_prg[n][1].status = MMUPAGE_SIZE-4;
 	for (i=2;i<d->nb_cmd-1;i++)
-		d->ir_prg[n][i].status = PAGE_SIZE;
+		d->ir_prg[n][i].status = MMUPAGE_SIZE;
 	d->ir_prg[n][i].status = d->left_size;
 }
 
@@ -498,15 +436,15 @@ static void initialize_dma_ir_prg(struct
 	ir_prg[0].branchAddress = (virt_to_bus(&(ir_prg[1].control))
 				   & 0xfffffff0) | 0x1;
 
-	/* the second descriptor will read PAGE_SIZE-4 bytes */
-	ir_prg[1].control = (0x280C << 16) | (PAGE_SIZE-4);
+	/* the second descriptor will read MMUPAGE_SIZE-4 bytes */
+	ir_prg[1].control = (0x280C << 16) | (MMUPAGE_SIZE-4);
 	ir_prg[1].address = kvirt_to_bus(buf+4);
 	ir_prg[1].branchAddress = (virt_to_bus(&(ir_prg[2].control))
 				   & 0xfffffff0) | 0x1;
 
 	for (i=2;i<d->nb_cmd-1;i++) {
-		ir_prg[i].control = (0x280C << 16) | PAGE_SIZE;
-		ir_prg[i].address = kvirt_to_bus(buf+(i-1)*PAGE_SIZE);
+		ir_prg[i].control = (0x280C << 16) | MMUPAGE_SIZE;
+		ir_prg[i].address = kvirt_to_bus(buf+(i-1)*MMUPAGE_SIZE);
 
 		ir_prg[i].branchAddress =
 			(virt_to_bus(&(ir_prg[i+1].control))
@@ -515,7 +453,7 @@ static void initialize_dma_ir_prg(struct
 
 	/* the last descriptor will generate an interrupt */
 	ir_prg[i].control = (0x283C << 16) | d->left_size;
-	ir_prg[i].address = kvirt_to_bus(buf+(i-1)*PAGE_SIZE);
+	ir_prg[i].address = kvirt_to_bus(buf+(i-1)*MMUPAGE_SIZE);
 }
 
 static void initialize_dma_ir_ctx(struct dma_iso_ctx *d, int tag, int flags)
@@ -804,9 +742,6 @@ static void initialize_dma_it_ctx(struct
 static int do_iso_mmap(struct ti_ohci *ohci, struct dma_iso_ctx *d,
 		       const char *adr, unsigned long size)
 {
-	unsigned long start=(unsigned long) adr;
-	unsigned long page,pos;
-
 	if (size>d->num_desc * d->buf_size) {
 		PRINT(KERN_ERR, ohci->id,
 		      "iso context %d buf size is different from mmap size",
@@ -818,17 +753,7 @@ static int do_iso_mmap(struct ti_ohci *o
 		      "iso context %d is not allocated", d->ctx);
 		return -EINVAL;
 	}
-
-	pos=(unsigned long) d->buf;
-	while (size > 0) {
-		page = kvirt_to_pa(pos);
-		if (remap_page_range(start, page, PAGE_SIZE, PAGE_SHARED))
-			return -EAGAIN;
-		start+=PAGE_SIZE;
-		pos+=PAGE_SIZE;
-		size-=PAGE_SIZE;
-	}
-	return 0;
+	return rvmap_page_range(adr, d->buf, size, PAGE_SHARED);
 }
 
 static int video1394_ioctl(struct inode *inode, struct file *file,
diff -urpN 
linux-2.4.9-linus/drivers/ieee1394/video1394.c.orig linux-2.4.9-larpage/drivers/ieee1394/video1394.c.orig --- linux-2.4.9-linus/drivers/ieee1394/video1394.c.orig 1969-12-31 16:00:00.000000000 -0800 +++ linux-2.4.9-larpage/drivers/ieee1394/video1394.c.orig 2002-11-20 02:02:43.000000000 -0800 @@ -0,0 +1,1615 @@ +/* + * video1394.c - video driver for OHCI 1394 boards + * Copyright (C)1999,2000 Sebastien Rougeaux + * Peter Schlaile + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software Foundation, + * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include + +#include "ieee1394.h" +#include "ieee1394_types.h" +#include "hosts.h" +#include "ieee1394_core.h" +#include "highlevel.h" +#include "video1394.h" + +#include "ohci1394.h" + +#define VIDEO1394_MAJOR 172 +#define ISO_CHANNELS 64 +#define ISO_RECEIVE 0 +#define ISO_TRANSMIT 1 + +#ifndef virt_to_page +#define virt_to_page(x) MAP_NR(x) +#endif + +#ifndef vmalloc_32 +#define vmalloc_32(x) vmalloc(x) +#endif + +struct it_dma_prg { + struct dma_cmd begin; + quadlet_t data[4]; + struct dma_cmd end; + quadlet_t pad[4]; /* FIXME: quick hack for memory alignment */ +}; + +struct dma_iso_ctx { + struct ti_ohci *ohci; + int ctx; + int channel; + int last_buffer; + int * next_buffer; /* For ISO Transmit of video packets + to write the correct SYT field + into the next block */ + unsigned int num_desc; + unsigned int buf_size; + unsigned int frame_size; + unsigned int packet_size; + unsigned int left_size; + unsigned int nb_cmd; + unsigned char *buf; + struct dma_cmd **ir_prg; + struct it_dma_prg **it_prg; + unsigned int *buffer_status; + unsigned int *last_used_cmd; /* For ISO Transmit with + variable sized packets only ! */ + int ctrlClear; + int ctrlSet; + int cmdPtr; + int ctxMatch; + wait_queue_head_t waitq; + spinlock_t lock; + unsigned int syt_offset; + int flags; +}; + +struct video_card { + struct ti_ohci *ohci; + struct list_head list; + int id; + devfs_handle_t devfs; + + struct dma_iso_ctx **ir_context; + struct dma_iso_ctx **it_context; + struct dma_iso_ctx *current_ctx; +}; + +#ifdef CONFIG_IEEE1394_VERBOSEDEBUG +#define VIDEO1394_DEBUG +#endif + +#ifdef DBGMSG +#undef DBGMSG +#endif + +#ifdef VIDEO1394_DEBUG +#define DBGMSG(card, fmt, args...) \ +printk(KERN_INFO "video1394_%d: " fmt "\n" , card , ## args) +#else +#define DBGMSG(card, fmt, args...) +#endif + +/* print general (card independent) information */ +#define PRINT_G(level, fmt, args...) \ +printk(level "video1394: " fmt "\n" , ## args) + +/* print card specific information */ +#define PRINT(level, card, fmt, args...) 
\ +printk(level "video1394_%d: " fmt "\n" , card , ## args) + +static void irq_handler(int card, quadlet_t isoRecvIntEvent, + quadlet_t isoXmitIntEvent); + +static LIST_HEAD(video1394_cards); +static spinlock_t video1394_cards_lock = SPIN_LOCK_UNLOCKED; + +static devfs_handle_t devfs_handle; +static struct hpsb_highlevel *hl_handle = NULL; + +static struct video_template video_tmpl = { irq_handler }; + +/**********************************************************/ +/* Memory management functions, copied from bttv-driver.c */ +/**********************************************************/ + +static void *rvmalloc(unsigned long size) +{ + void *mem; + + mem = vmalloc_32(size); + if (mem) { + /* no junk to the user */ + memset(mem, 0, PAGE_ALIGN(size)); + /* no need to reserve until rvmap_page_range */ + } + return mem; +} + +static void rvfree(void *mem, unsigned long size) +{ + unsigned long vadr; + + if (mem) { + vadr = (unsigned long) mem; + while ((long) size > 0) { + ClearPageReserved(vvirt_to_page(vadr)); + vadr += PAGE_SIZE; + size -= PAGE_SIZE; + } + vfree(mem); + } +} + +static inline int rvmap_page_range(const char *uadr, void *mem, + unsigned long size, pgprot_t prot) +{ + struct page *page; + unsigned long padr; + unsigned long unit = PAGE_SIZE; + + while ((long) size > 0) { + if (unit > size) + unit = size; + page = vvirt_to_page((unsigned long)mem); + SetPageReserved(page); + padr = __pa(page_address(page)); + if (remap_page_range((unsigned long)uadr, padr, unit, prot)) + return -EAGAIN; + uadr += PAGE_SIZE; + mem += PAGE_SIZE; + size -= PAGE_SIZE; + } + return 0; +} + +static inline unsigned long kvirt_to_bus(unsigned long vadr) +{ + unsigned long kadr; + + kadr = (unsigned long) page_address(vvirt_to_page(vadr)) + + (vadr & ~PAGE_MASK); + return virt_to_bus((void *) kadr); +} + +static int free_dma_iso_ctx(struct dma_iso_ctx **d) +{ + int i; + struct ti_ohci *ohci; + + if ((*d)==NULL) return -1; + + ohci = (struct ti_ohci *)(*d)->ohci; + + DBGMSG(ohci->id, "Freeing dma_iso_ctx %d", (*d)->ctx); + + ohci1394_stop_context(ohci, (*d)->ctrlClear, NULL); + + if ((*d)->buf) rvfree((void *)(*d)->buf, + (*d)->num_desc * (*d)->buf_size); + + if ((*d)->ir_prg) { + for (i=0;i<(*d)->num_desc;i++) + if ((*d)->ir_prg[i]) kfree((*d)->ir_prg[i]); + kfree((*d)->ir_prg); + } + + if ((*d)->it_prg) { + for (i=0;i<(*d)->num_desc;i++) + if ((*d)->it_prg[i]) kfree((*d)->it_prg[i]); + kfree((*d)->it_prg); + } + + if ((*d)->buffer_status) + kfree((*d)->buffer_status); + if ((*d)->last_used_cmd) + kfree((*d)->last_used_cmd); + if ((*d)->next_buffer) + kfree((*d)->next_buffer); + + kfree(*d); + *d = NULL; + + return 0; +} + +static struct dma_iso_ctx * +alloc_dma_iso_ctx(struct ti_ohci *ohci, int type, int ctx, int num_desc, + int buf_size, int channel, unsigned int packet_size) +{ + struct dma_iso_ctx *d=NULL; + int i; + + d = (struct dma_iso_ctx *)kmalloc(sizeof(struct dma_iso_ctx), + GFP_KERNEL); + if (d==NULL) { + PRINT(KERN_ERR, ohci->id, "Failed to allocate dma_iso_ctx"); + return NULL; + } + + memset(d, 0, sizeof(struct dma_iso_ctx)); + + d->ohci = (void *)ohci; + d->ctx = ctx; + d->channel = channel; + d->num_desc = num_desc; + d->frame_size = buf_size; + d->buf_size = MMUPAGE_ALIGN(buf_size); + d->last_buffer = -1; + d->buf = NULL; + d->ir_prg = NULL; + init_waitqueue_head(&d->waitq); + + d->buf = rvmalloc(d->num_desc * d->buf_size); + + if (d->buf == NULL) { + PRINT(KERN_ERR, ohci->id, "Failed to allocate dma buffer"); + free_dma_iso_ctx(&d); + return NULL; + } + memset(d->buf, 0, d->num_desc * 
d->buf_size);
+
+	if (type == ISO_RECEIVE) {
+		d->ctrlSet = OHCI1394_IsoRcvContextControlSet+32*d->ctx;
+		d->ctrlClear = OHCI1394_IsoRcvContextControlClear+32*d->ctx;
+		d->cmdPtr = OHCI1394_IsoRcvCommandPtr+32*d->ctx;
+		d->ctxMatch = OHCI1394_IsoRcvContextMatch+32*d->ctx;
+
+		d->ir_prg = kmalloc(d->num_desc * sizeof(struct dma_cmd *),
+				    GFP_KERNEL);
+
+		if (d->ir_prg == NULL) {
+			PRINT(KERN_ERR, ohci->id,
+			      "Failed to allocate dma ir prg");
+			free_dma_iso_ctx(&d);
+			return NULL;
+		}
+		memset(d->ir_prg, 0, d->num_desc * sizeof(struct dma_cmd *));
+
+		d->nb_cmd = d->buf_size / MMUPAGE_SIZE + 1;
+		d->left_size = (d->frame_size % MMUPAGE_SIZE) ?
+			d->frame_size % MMUPAGE_SIZE : MMUPAGE_SIZE;
+
+		for (i=0;i<d->num_desc;i++) {
+			d->ir_prg[i] = kmalloc(d->nb_cmd *
+					       sizeof(struct dma_cmd),
+					       GFP_KERNEL);
+			if (d->ir_prg[i] == NULL) {
+				PRINT(KERN_ERR, ohci->id,
+				      "Failed to allocate dma ir prg");
+				free_dma_iso_ctx(&d);
+				return NULL;
+			}
+		}
+	}
+	else {  /* ISO_TRANSMIT */
+		d->ctrlSet = OHCI1394_IsoXmitContextControlSet+16*d->ctx;
+		d->ctrlClear = OHCI1394_IsoXmitContextControlClear+16*d->ctx;
+		d->cmdPtr = OHCI1394_IsoXmitCommandPtr+16*d->ctx;
+
+		d->it_prg = kmalloc(d->num_desc * sizeof(struct it_dma_prg *),
+				    GFP_KERNEL);
+
+		if (d->it_prg == NULL) {
+			PRINT(KERN_ERR, ohci->id,
+			      "Failed to allocate dma it prg");
+			free_dma_iso_ctx(&d);
+			return NULL;
+		}
+		memset(d->it_prg, 0, d->num_desc*sizeof(struct it_dma_prg *));
+
+		d->packet_size = packet_size;
+
+		if (MMUPAGE_SIZE % packet_size || packet_size>4096) {
+			PRINT(KERN_ERR, ohci->id,
+			      "Packet size %d (page_size: %ld) "
+			      "not yet supported\n",
+			      packet_size, MMUPAGE_SIZE);
+			free_dma_iso_ctx(&d);
+			return NULL;
+		}
+
+		d->nb_cmd = d->frame_size / d->packet_size;
+		if (d->frame_size % d->packet_size) {
+			d->nb_cmd++;
+			d->left_size = d->frame_size % d->packet_size;
+		}
+		else
+			d->left_size = d->packet_size;
+
+		for (i=0;i<d->num_desc;i++) {
+			d->it_prg[i] = kmalloc(d->nb_cmd *
+					       sizeof(struct it_dma_prg),
+					       GFP_KERNEL);
+			if (d->it_prg[i] == NULL) {
+				PRINT(KERN_ERR, ohci->id,
+				      "Failed to allocate dma it prg");
+				free_dma_iso_ctx(&d);
+				return NULL;
+			}
+		}
+	}
+
+	d->buffer_status = kmalloc(d->num_desc * sizeof(unsigned int),
+				   GFP_KERNEL);
+	d->last_used_cmd = kmalloc(d->num_desc * sizeof(unsigned int),
+				   GFP_KERNEL);
+	d->next_buffer = kmalloc(d->num_desc * sizeof(int),
+				 GFP_KERNEL);
+
+	if (d->buffer_status == NULL) {
+		PRINT(KERN_ERR, ohci->id, "Failed to allocate buffer_status");
+		free_dma_iso_ctx(&d);
+		return NULL;
+	}
+	if (d->last_used_cmd == NULL) {
+		PRINT(KERN_ERR, ohci->id, "Failed to allocate last_used_cmd");
+		free_dma_iso_ctx(&d);
+		return NULL;
+	}
+	if (d->next_buffer == NULL) {
+		PRINT(KERN_ERR, ohci->id, "Failed to allocate next_buffer");
+		free_dma_iso_ctx(&d);
+		return NULL;
+	}
+	memset(d->buffer_status, 0, d->num_desc * sizeof(unsigned int));
+	memset(d->last_used_cmd, 0, d->num_desc * sizeof(unsigned int));
+	memset(d->next_buffer, -1, d->num_desc * sizeof(int));
+
+	spin_lock_init(&d->lock);
+
+	PRINT(KERN_INFO, ohci->id, "Iso %s DMA: %d buffers "
+	      "of size %d allocated for a frame size %d, each with %d prgs",
+	      (type==ISO_RECEIVE) ? "receive" : "transmit",
"receive" : "transmit", + d->num_desc, d->buf_size, d->frame_size, d->nb_cmd); + + return d; +} + +static void reset_ir_status(struct dma_iso_ctx *d, int n) +{ + int i; + d->ir_prg[n][0].status = 4; + d->ir_prg[n][1].status = MMUPAGE_SIZE-4; + for (i=2;inb_cmd-1;i++) + d->ir_prg[n][i].status = MMUPAGE_SIZE; + d->ir_prg[n][i].status = d->left_size; +} + +static void initialize_dma_ir_prg(struct dma_iso_ctx *d, int n, int flags) +{ + struct dma_cmd *ir_prg = d->ir_prg[n]; + unsigned long buf = (unsigned long)d->buf+n*d->buf_size; + int i; + + /* the first descriptor will read only 4 bytes */ + ir_prg[0].control = (0x280C << 16) | 4; + + /* set the sync flag */ + if (flags & VIDEO1394_SYNC_FRAMES) + ir_prg[0].control |= 0x00030000; + + ir_prg[0].address = kvirt_to_bus(buf); + ir_prg[0].branchAddress = (virt_to_bus(&(ir_prg[1].control)) + & 0xfffffff0) | 0x1; + + /* the second descriptor will read MMUPAGE_SIZE-4 bytes */ + ir_prg[1].control = (0x280C << 16) | (MMUPAGE_SIZE-4); + ir_prg[1].address = kvirt_to_bus(buf+4); + ir_prg[1].branchAddress = (virt_to_bus(&(ir_prg[2].control)) + & 0xfffffff0) | 0x1; + + for (i=2;inb_cmd-1;i++) { + ir_prg[i].control = (0x280C << 16) | MMUPAGE_SIZE; + ir_prg[i].address = kvirt_to_bus(buf+(i-1)*MMUPAGE_SIZE); + + ir_prg[i].branchAddress = + (virt_to_bus(&(ir_prg[i+1].control)) + & 0xfffffff0) | 0x1; + } + + /* the last descriptor will generate an interrupt */ + ir_prg[i].control = (0x283C << 16) | d->left_size; + ir_prg[i].address = kvirt_to_bus(buf+(i-1)*MMUPAGE_SIZE); +} + +static void initialize_dma_ir_ctx(struct dma_iso_ctx *d, int tag, int flags) +{ + struct ti_ohci *ohci = (struct ti_ohci *)d->ohci; + int i; + + d->flags = flags; + + ohci1394_stop_context(ohci, d->ctrlClear, NULL); + + for (i=0;inum_desc;i++) { + initialize_dma_ir_prg(d, i, flags); + reset_ir_status(d, i); + } + + /* reset the ctrl register */ + reg_write(ohci, d->ctrlClear, 0xf0000000); + + /* Set bufferFill */ + reg_write(ohci, d->ctrlSet, 0x80000000); + + /* Set isoch header */ + if (flags & VIDEO1394_INCLUDE_ISO_HEADERS) + reg_write(ohci, d->ctrlSet, 0x40000000); + + /* Set the context match register to match on all tags, + sync for sync tag, and listen to d->channel */ + reg_write(ohci, d->ctxMatch, 0xf0000000|((tag&0xf)<<8)|d->channel); + + /* Set up isoRecvIntMask to generate interrupts */ + reg_write(ohci, OHCI1394_IsoRecvIntMaskSet, 1<ctx); +} + +/* find which context is listening to this channel */ +int ir_ctx_listening(struct video_card *video, int channel) +{ + int i; + struct ti_ohci *ohci = video->ohci; + + for (i=0;inb_iso_rcv_ctx-1;i++) + if (video->ir_context[i]) { + if (video->ir_context[i]->channel==channel) + return i; + } + + PRINT(KERN_ERR, ohci->id, "No iso context is listening to channel %d", + channel); + + return -1; +} + +int it_ctx_talking(struct video_card *video, int channel) +{ + int i; + struct ti_ohci *ohci = video->ohci; + + for (i=0;inb_iso_xmit_ctx;i++) + if (video->it_context[i]) { + if (video->it_context[i]->channel==channel) + return i; + } + + PRINT(KERN_ERR, ohci->id, "No iso context is talking to channel %d", + channel); + + return -1; +} + +int wakeup_dma_ir_ctx(struct ti_ohci *ohci, struct dma_iso_ctx *d) +{ + int i; + + if (d==NULL) { + PRINT(KERN_ERR, ohci->id, "Iso receive event received but " + "context not allocated"); + return -EFAULT; + } + + spin_lock(&d->lock); + for (i=0;inum_desc;i++) { + if (d->ir_prg[i][d->nb_cmd-1].status & 0xFFFF0000) { + reset_ir_status(d, i); + d->buffer_status[i] = VIDEO1394_BUFFER_READY; + } + } + 
+
+static inline void put_timestamp(struct ti_ohci *ohci, struct dma_iso_ctx * d,
+				 int n)
+{
+	unsigned char* buf = d->buf + n * d->buf_size;
+	u32 cycleTimer;
+	u32 timeStamp;
+
+	if (n == -1) {
+		return;
+	}
+
+	cycleTimer = reg_read(ohci, OHCI1394_IsochronousCycleTimer);
+
+	timeStamp = ((cycleTimer & 0x0fff) + d->syt_offset); /* 11059 = 450 us */
+	timeStamp = (timeStamp % 3072 + ((timeStamp / 3072) << 12)
+		     + (cycleTimer & 0xf000)) & 0xffff;
+
+	buf[6] = timeStamp >> 8;
+	buf[7] = timeStamp & 0xff;
+
+	/* if first packet is empty packet, then put timestamp into the next full one too */
+	if ( (d->it_prg[n][0].data[1] >>16) == 0x008) {
+		buf += d->packet_size;
+		buf[6] = timeStamp >> 8;
+		buf[7] = timeStamp & 0xff;
+	}
+
+	/* do the next buffer frame too in case of irq latency */
+	n = d->next_buffer[n];
+	if (n == -1) {
+		return;
+	}
+	buf = d->buf + n * d->buf_size;
+
+	timeStamp += (d->last_used_cmd[n] << 12) & 0xffff;
+
+	buf[6] = timeStamp >> 8;
+	buf[7] = timeStamp & 0xff;
+
+	/* if first packet is empty packet, then put timestamp into the next full one too */
+	if ( (d->it_prg[n][0].data[1] >>16) == 0x008) {
+		buf += d->packet_size;
+		buf[6] = timeStamp >> 8;
+		buf[7] = timeStamp & 0xff;
+	}
+
+#if 0
+	printk("curr: %d, next: %d, cycleTimer: %08x timeStamp: %08x\n",
+	       curr, n, cycleTimer, timeStamp);
+#endif
+}
+
+int wakeup_dma_it_ctx(struct ti_ohci *ohci, struct dma_iso_ctx *d)
+{
+	int i;
+
+	if (d==NULL) {
+		PRINT(KERN_ERR, ohci->id, "Iso transmit event received but "
+		      "context not allocated");
+		return -EFAULT;
+	}
+
+	spin_lock(&d->lock);
+	for (i=0;i<d->num_desc;i++) {
+		if (d->it_prg[i][d->last_used_cmd[i]].end.status & 0xFFFF0000) {
+			int next = d->next_buffer[i];
+			put_timestamp(ohci, d, next);
+			d->it_prg[i][d->last_used_cmd[i]].end.status = 0;
+			d->buffer_status[i] = VIDEO1394_BUFFER_READY;
+		}
+	}
+	spin_unlock(&d->lock);
+	if (waitqueue_active(&d->waitq)) wake_up_interruptible(&d->waitq);
+	return 0;
+}
+
+static void initialize_dma_it_prg(struct dma_iso_ctx *d, int n, int sync_tag)
+{
+	struct it_dma_prg *it_prg = d->it_prg[n];
+	unsigned long buf = (unsigned long)d->buf+n*d->buf_size;
+	int i;
+	d->last_used_cmd[n] = d->nb_cmd - 1;
+	for (i=0;i<d->nb_cmd;i++) {
+
+		it_prg[i].begin.control = OUTPUT_MORE_IMMEDIATE | 8 ;
+		it_prg[i].begin.address = 0;
+
+		it_prg[i].begin.status = 0;
+
+		it_prg[i].data[0] =
+			(DMA_SPEED_100 << 16)
+			| (/* tag */ 1 << 14)
+			| (d->channel << 8)
+			| (TCODE_ISO_DATA << 4);
+		if (i==0) it_prg[i].data[0] |= sync_tag;
+		it_prg[i].data[1] = d->packet_size << 16;
+		it_prg[i].data[2] = 0;
+		it_prg[i].data[3] = 0;
+
+		it_prg[i].end.control = 0x100c0000;
+		it_prg[i].end.address =
+			kvirt_to_bus(buf+i*d->packet_size);
+
+		if (i<d->nb_cmd-1) {
+			it_prg[i].end.control |= d->packet_size;
+			it_prg[i].begin.branchAddress =
+				(virt_to_bus(&(it_prg[i+1].begin.control))
+				 & 0xfffffff0) | 0x3;
+			it_prg[i].end.branchAddress =
+				(virt_to_bus(&(it_prg[i+1].begin.control))
+				 & 0xfffffff0) | 0x3;
+		}
+		else {
+			/* the last prg generates an interrupt */
+			it_prg[i].end.control |= 0x08300000 | d->left_size;
+			/* the last prg doesn't branch */
+			it_prg[i].begin.branchAddress = 0;
+			it_prg[i].end.branchAddress = 0;
+		}
+		it_prg[i].end.status = 0;
+
+#if 0
+		printk("%d:%d: %08x-%08x ctrl %08x brch %08x d0 %08x d1 %08x\n",n,i,
+		       virt_to_bus(&(it_prg[i].begin.control)),
+		       virt_to_bus(&(it_prg[i].end.control)),
+		       it_prg[i].end.control,
+		       it_prg[i].end.branchAddress,
+		       it_prg[i].data[0], it_prg[i].data[1]);
+#endif
+	}
+}
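
put_timestamp() above has to renormalize after adding syt_offset because the low 12 bits of the OHCI cycle timer count in units of 1/3072 of a 125 us isochronous cycle: any carry past 3072 must be moved into the cycle-count field at bits 12-15. Just that arithmetic, as a userspace sketch with hypothetical input values:

	#include <stdio.h>

	/* cycle_timer: OHCI cycle timer, cycleCount in bits 12-15 here,
	 * cycleOffset (0..3071) in the low 12 bits.
	 * syt_offset: offset to add, e.g. the driver default of 11000.
	 */
	static unsigned int syt_stamp(unsigned int cycle_timer,
				      unsigned int syt_offset)
	{
		unsigned int t = (cycle_timer & 0x0fff) + syt_offset;

		return (t % 3072 + ((t / 3072) << 12) +
			(cycle_timer & 0xf000)) & 0xffff;
	}

	int main(void)
	{
		printf("%04x\n", syt_stamp(0x5678, 11000));
		return 0;
	}
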
it_prg[i].data[0], it_prg[i].data[1]); +#endif + } +} + +static void initialize_dma_it_prg_var_packet_queue( + struct dma_iso_ctx *d, int n, unsigned int * packet_sizes, + struct ti_ohci *ohci) +{ + struct it_dma_prg *it_prg = d->it_prg[n]; + int i; + +#if 0 + if (n != -1) { + put_timestamp(ohci, d, n); + } +#endif + d->last_used_cmd[n] = d->nb_cmd - 1; + + for (i = 0; i < d->nb_cmd; i++) { + unsigned int size; + if (packet_sizes[i] > d->packet_size) { + size = d->packet_size; + } else { + size = packet_sizes[i]; + } + it_prg[i].data[1] = size << 16; + it_prg[i].end.control = 0x100c0000; + + if (i < d->nb_cmd-1 && packet_sizes[i+1] != 0) { + it_prg[i].end.control |= size; + it_prg[i].begin.branchAddress = + (virt_to_bus(&(it_prg[i+1].begin.control)) + & 0xfffffff0) | 0x3; + it_prg[i].end.branchAddress = + (virt_to_bus(&(it_prg[i+1].begin.control)) + & 0xfffffff0) | 0x3; + } else { + /* the last prg generates an interrupt */ + it_prg[i].end.control |= 0x08300000 | size; + /* the last prg doesn't branch */ + it_prg[i].begin.branchAddress = 0; + it_prg[i].end.branchAddress = 0; + d->last_used_cmd[n] = i; + break; + } + } +} + +static void initialize_dma_it_ctx(struct dma_iso_ctx *d, int sync_tag, + unsigned int syt_offset, int flags) +{ + struct ti_ohci *ohci = (struct ti_ohci *)d->ohci; + int i; + + d->flags = flags; + d->syt_offset = (syt_offset == 0 ? 11000 : syt_offset); + + ohci1394_stop_context(ohci, d->ctrlClear, NULL); + + for (i=0;i<d->num_desc;i++) + initialize_dma_it_prg(d, i, sync_tag); + + /* Set up isoXmitIntMask to generate interrupts */ + reg_write(ohci, OHCI1394_IsoXmitIntMaskSet, 1<<d->ctx); +} + +static int do_iso_mmap(struct ti_ohci *ohci, struct dma_iso_ctx *d, + const char *adr, unsigned long size) +{ + unsigned long start = (unsigned long)adr; + unsigned long page, pos; + + if (size>d->num_desc * d->buf_size) { + PRINT(KERN_ERR, ohci->id, + "iso context %d buf size is different from mmap size", + d->ctx); + return -EINVAL; + } + if (!d->buf) { + PRINT(KERN_ERR, ohci->id, + "iso context %d is not allocated", d->ctx); + return -EINVAL; + } + + pos=(unsigned long) d->buf; + while (size > 0) { + page = kvirt_to_pa(pos); + if (remap_page_range(start, page, PAGE_SIZE, PAGE_SHARED)) + return -EAGAIN; + start+=PAGE_SIZE; + pos+=PAGE_SIZE; + size-=PAGE_SIZE; + } + return 0; +} + +static int video1394_ioctl(struct inode *inode, struct file *file, + unsigned int cmd, unsigned long arg) +{ + struct video_card *video = NULL; + struct ti_ohci *ohci = NULL; + unsigned long flags; + struct list_head *lh; + + spin_lock_irqsave(&video1394_cards_lock, flags); + if (!list_empty(&video1394_cards)) { + struct video_card *p; + list_for_each(lh, &video1394_cards) { + p = list_entry(lh, struct video_card, list); + if (p->id == MINOR(inode->i_rdev)) { + video = p; + ohci = video->ohci; + break; + } + } + } + spin_unlock_irqrestore(&video1394_cards_lock, flags); + + if (video == NULL) { + PRINT_G(KERN_ERR, __FUNCTION__": Unknown video card for minor %d", MINOR(inode->i_rdev)); + return -EFAULT; + } + + switch(cmd) + { + case VIDEO1394_LISTEN_CHANNEL: + case VIDEO1394_TALK_CHANNEL: + { + struct video1394_mmap v; + u64 mask; + int i; + + if(copy_from_user(&v, (void *)arg, sizeof(v))) + return -EFAULT; + if (v.channel<0 || v.channel>(ISO_CHANNELS-1)) { + PRINT(KERN_ERR, ohci->id, + "Iso channel %d out of bound", v.channel); + return -EFAULT; + } + mask = (u64)0x1<<v.channel; + printk("mask: %08X%08X usage: %08X%08X\n", + (u32)(mask>>32),(u32)(mask&0xffffffff), + (u32)(ohci->ISO_channel_usage>>32), + (u32)(ohci->ISO_channel_usage&0xffffffff)); + if (ohci->ISO_channel_usage & mask) { + PRINT(KERN_ERR, ohci->id, + "Channel %d is already taken", 
v.channel); + return -EFAULT; + } + ohci->ISO_channel_usage |= mask; + + if (v.buf_size<=0) { + PRINT(KERN_ERR, ohci->id, + "Invalid %d length buffer requested",v.buf_size); + return -EFAULT; + } + + if (v.nb_buffers<=0) { + PRINT(KERN_ERR, ohci->id, + "Invalid %d buffers requested",v.nb_buffers); + return -EFAULT; + } + + if (v.nb_buffers * v.buf_size > VIDEO1394_MAX_SIZE) { + PRINT(KERN_ERR, ohci->id, + "%d buffers of size %d bytes is too big", + v.nb_buffers, v.buf_size); + return -EFAULT; + } + + if (cmd == VIDEO1394_LISTEN_CHANNEL) { + /* find a free iso receive context */ + for (i=0;i<ohci->nb_iso_rcv_ctx-1;i++) + if (video->ir_context[i]==NULL) break; + + if (i==(ohci->nb_iso_rcv_ctx-1)) { + PRINT(KERN_ERR, ohci->id, + "No iso context available"); + return -EFAULT; + } + + video->ir_context[i] = + alloc_dma_iso_ctx(ohci, ISO_RECEIVE, i+1, + v.nb_buffers, v.buf_size, + v.channel, 0); + + if (video->ir_context[i] == NULL) { + PRINT(KERN_ERR, ohci->id, + "Couldn't allocate ir context"); + return -EFAULT; + } + initialize_dma_ir_ctx(video->ir_context[i], + v.sync_tag, v.flags); + + video->current_ctx = video->ir_context[i]; + + v.buf_size = video->ir_context[i]->buf_size; + + PRINT(KERN_INFO, ohci->id, + "iso context %d listen on channel %d", i+1, + v.channel); + } + else { + /* find a free iso transmit context */ + for (i=0;i<ohci->nb_iso_xmit_ctx;i++) + if (video->it_context[i]==NULL) break; + + if (i==ohci->nb_iso_xmit_ctx) { + PRINT(KERN_ERR, ohci->id, + "No iso context available"); + return -EFAULT; + } + + video->it_context[i] = + alloc_dma_iso_ctx(ohci, ISO_TRANSMIT, i, + v.nb_buffers, v.buf_size, + v.channel, v.packet_size); + + if (video->it_context[i] == NULL) { + PRINT(KERN_ERR, ohci->id, + "Couldn't allocate it context"); + return -EFAULT; + } + initialize_dma_it_ctx(video->it_context[i], + v.sync_tag, v.syt_offset, v.flags); + + video->current_ctx = video->it_context[i]; + + v.buf_size = video->it_context[i]->buf_size; + + PRINT(KERN_INFO, ohci->id, + "Iso context %d talk on channel %d", i, + v.channel); + } + + if(copy_to_user((void *)arg, &v, sizeof(v))) + return -EFAULT; + + return 0; + } + case VIDEO1394_UNLISTEN_CHANNEL: + case VIDEO1394_UNTALK_CHANNEL: + { + int channel; + u64 mask; + int i; + + if(copy_from_user(&channel, (void *)arg, sizeof(int))) + return -EFAULT; + + if (channel<0 || channel>(ISO_CHANNELS-1)) { + PRINT(KERN_ERR, ohci->id, + "Iso channel %d out of bound", channel); + return -EFAULT; + } + mask = (u64)0x1<<channel; + if (!(ohci->ISO_channel_usage & mask)) { + PRINT(KERN_ERR, ohci->id, + "Channel %d is not being used", channel); + return -EFAULT; + } + ohci->ISO_channel_usage &= ~mask; + + if (cmd == VIDEO1394_UNLISTEN_CHANNEL) { + i = ir_ctx_listening(video, channel); + if (i<0) return -EFAULT; + + free_dma_iso_ctx(&video->ir_context[i]); + + PRINT(KERN_INFO, ohci->id, + "Iso context %d stop listening on channel %d", + i+1, channel); + } + else { + i = it_ctx_talking(video, channel); + if (i<0) return -EFAULT; + + free_dma_iso_ctx(&video->it_context[i]); + + PRINT(KERN_INFO, ohci->id, + "Iso context %d stop talking on channel %d", + i, channel); + } + + return 0; + } + case VIDEO1394_LISTEN_QUEUE_BUFFER: + { + struct video1394_wait v; + struct dma_iso_ctx *d; + int i; + + if(copy_from_user(&v, (void *)arg, sizeof(v))) + return -EFAULT; + + i = ir_ctx_listening(video, v.channel); + if (i<0) return -EFAULT; + d = video->ir_context[i]; + + if ((v.buffer<0) || (v.buffer>d->num_desc)) { + PRINT(KERN_ERR, ohci->id, + "Buffer %d out of range",v.buffer); + return -EFAULT; + } + + 
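+ /* under d->lock: chain this buffer's ir program after the current tail's, make it the new tail, and clear its branchAddress so the context stalls there until another buffer is queued */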
spin_lock_irqsave(&d->lock,flags); + + if (d->buffer_status[v.buffer]==VIDEO1394_BUFFER_QUEUED) { + PRINT(KERN_ERR, ohci->id, + "Buffer %d is already used",v.buffer); + spin_unlock_irqrestore(&d->lock,flags); + return -EFAULT; + } + + d->buffer_status[v.buffer]=VIDEO1394_BUFFER_QUEUED; + + if (d->last_buffer>=0) + d->ir_prg[d->last_buffer][d->nb_cmd-1].branchAddress = + (virt_to_bus(&(d->ir_prg[v.buffer][0].control)) + & 0xfffffff0) | 0x1; + + d->last_buffer = v.buffer; + + d->ir_prg[d->last_buffer][d->nb_cmd-1].branchAddress = 0; + + spin_unlock_irqrestore(&d->lock,flags); + + if (!(reg_read(ohci, d->ctrlSet) & 0x8000)) + { + DBGMSG(ohci->id, "Starting iso DMA ctx=%d",d->ctx); + + /* Tell the controller where the first program is */ + reg_write(ohci, d->cmdPtr, + virt_to_bus(&(d->ir_prg[v.buffer][0]))|0x1); + + /* Run IR context */ + reg_write(ohci, d->ctrlSet, 0x8000); + } + else { + /* Wake up dma context if necessary */ + if (!(reg_read(ohci, d->ctrlSet) & 0x400)) { + PRINT(KERN_INFO, ohci->id, + "Waking up iso dma ctx=%d", d->ctx); + reg_write(ohci, d->ctrlSet, 0x1000); + } + } + return 0; + + } + case VIDEO1394_LISTEN_WAIT_BUFFER: + { + struct video1394_wait v; + struct dma_iso_ctx *d; + int i; + + if(copy_from_user(&v, (void *)arg, sizeof(v))) + return -EFAULT; + + i = ir_ctx_listening(video, v.channel); + if (i<0) return -EFAULT; + d = video->ir_context[i]; + + if ((v.buffer<0) || (v.buffer>d->num_desc)) { + PRINT(KERN_ERR, ohci->id, + "Buffer %d out of range",v.buffer); + return -EFAULT; + } + + /* + * I change the way it works so that it returns + * the last received frame. + */ + spin_lock_irqsave(&d->lock, flags); + switch(d->buffer_status[v.buffer]) { + case VIDEO1394_BUFFER_READY: + d->buffer_status[v.buffer]=VIDEO1394_BUFFER_FREE; + break; + case VIDEO1394_BUFFER_QUEUED: +#if 1 + while(d->buffer_status[v.buffer]!= + VIDEO1394_BUFFER_READY) { + spin_unlock_irqrestore(&d->lock, flags); + interruptible_sleep_on(&d->waitq); + spin_lock_irqsave(&d->lock, flags); + if(signal_pending(current)) { + spin_unlock_irqrestore(&d->lock,flags); + return -EINTR; + } + } +#else + if (wait_event_interruptible(d->waitq, + d->buffer_status[v.buffer] + == VIDEO1394_BUFFER_READY) + == -ERESTARTSYS) + return -EINTR; +#endif + d->buffer_status[v.buffer]=VIDEO1394_BUFFER_FREE; + break; + default: + PRINT(KERN_ERR, ohci->id, + "Buffer %d is not queued",v.buffer); + spin_unlock_irqrestore(&d->lock, flags); + return -EFAULT; + } + + /* + * Look ahead to see how many more buffers have been received + */ + i=0; + while (d->buffer_status[(v.buffer+1)%d->num_desc]== + VIDEO1394_BUFFER_READY) { + v.buffer=(v.buffer+1)%d->num_desc; + i++; + } + spin_unlock_irqrestore(&d->lock, flags); + + v.buffer=i; + if(copy_to_user((void *)arg, &v, sizeof(v))) + return -EFAULT; + + return 0; + } + case VIDEO1394_TALK_QUEUE_BUFFER: + { + struct video1394_wait v; + struct video1394_queue_variable qv; + struct dma_iso_ctx *d; + int i; + + if(copy_from_user(&v, (void *)arg, sizeof(v))) + return -EFAULT; + + i = it_ctx_talking(video, v.channel); + if (i<0) return -EFAULT; + d = video->it_context[i]; + + if ((v.buffer<0) || (v.buffer>d->num_desc)) { + PRINT(KERN_ERR, ohci->id, + "Buffer %d out of range",v.buffer); + return -EFAULT; + } + + if (d->flags & VIDEO1394_VARIABLE_PACKET_SIZE) { + if (copy_from_user(&qv, (void *)arg, sizeof(qv))) + return -EFAULT; + if (!access_ok(VERIFY_READ, qv.packet_sizes, + d->nb_cmd * sizeof(unsigned int))) { + return -EFAULT; + } + } + + spin_lock_irqsave(&d->lock,flags); + + if 
(d->buffer_status[v.buffer]!=VIDEO1394_BUFFER_FREE) { + PRINT(KERN_ERR, ohci->id, + "Buffer %d is already used",v.buffer); + spin_unlock_irqrestore(&d->lock,flags); + return -EFAULT; + } + + if (d->flags & VIDEO1394_VARIABLE_PACKET_SIZE) { + initialize_dma_it_prg_var_packet_queue( + d, v.buffer, qv.packet_sizes, + ohci); + } + + d->buffer_status[v.buffer]=VIDEO1394_BUFFER_QUEUED; + + if (d->last_buffer>=0) { + d->it_prg[d->last_buffer] + [ d->last_used_cmd[d->last_buffer] + ].end.branchAddress = + (virt_to_bus(&(d->it_prg[v.buffer][0].begin.control)) + & 0xfffffff0) | 0x3; + + d->it_prg[d->last_buffer] + [d->last_used_cmd[d->last_buffer] + ].begin.branchAddress = + (virt_to_bus(&(d->it_prg[v.buffer][0].begin.control)) + & 0xfffffff0) | 0x3; + d->next_buffer[d->last_buffer] = v.buffer; + } + d->last_buffer = v.buffer; + d->next_buffer[d->last_buffer] = -1; + + d->it_prg[d->last_buffer][d->last_used_cmd[d->last_buffer]].end.branchAddress = 0; + + spin_unlock_irqrestore(&d->lock,flags); + + if (!(reg_read(ohci, d->ctrlSet) & 0x8000)) + { + DBGMSG(ohci->id, "Starting iso transmit DMA ctx=%d", + d->ctx); + put_timestamp(ohci, d, d->last_buffer); + + /* Tell the controller where the first program is */ + reg_write(ohci, d->cmdPtr, + virt_to_bus(&(d->it_prg[v.buffer][0]))|0x3); + + /* Run IT context */ + reg_write(ohci, d->ctrlSet, 0x8000); + } + else { + /* Wake up dma context if necessary */ + if (!(reg_read(ohci, d->ctrlSet) & 0x400)) { + PRINT(KERN_INFO, ohci->id, + "Waking up iso transmit dma ctx=%d", + d->ctx); + put_timestamp(ohci, d, d->last_buffer); + reg_write(ohci, d->ctrlSet, 0x1000); + } + } + return 0; + + } + case VIDEO1394_TALK_WAIT_BUFFER: + { + struct video1394_wait v; + struct dma_iso_ctx *d; + int i; + + if(copy_from_user(&v, (void *)arg, sizeof(v))) + return -EFAULT; + + i = it_ctx_talking(video, v.channel); + if (i<0) return -EFAULT; + d = video->it_context[i]; + + if ((v.buffer<0) || (v.buffer>d->num_desc)) { + PRINT(KERN_ERR, ohci->id, + "Buffer %d out of range",v.buffer); + return -EFAULT; + } + + switch(d->buffer_status[v.buffer]) { + case VIDEO1394_BUFFER_READY: + d->buffer_status[v.buffer]=VIDEO1394_BUFFER_FREE; + return 0; + case VIDEO1394_BUFFER_QUEUED: +#if 1 + while(d->buffer_status[v.buffer]!= + VIDEO1394_BUFFER_READY) { + interruptible_sleep_on(&d->waitq); + if(signal_pending(current)) return -EINTR; + } +#else + if (wait_event_interruptible(d->waitq, + d->buffer_status[v.buffer] + == VIDEO1394_BUFFER_READY) + == -ERESTARTSYS) + return -EINTR; +#endif + d->buffer_status[v.buffer]=VIDEO1394_BUFFER_FREE; + return 0; + default: + PRINT(KERN_ERR, ohci->id, + "Buffer %d is not queued",v.buffer); + return -EFAULT; + } + } + default: + return -EINVAL; + } +} + +/* + * This maps the vmalloced and reserved buffer to user space. + * + * FIXME: + * - PAGE_READONLY should suffice!? + * - remap_page_range is kind of inefficient for page by page remapping. + * But e.g. pte_alloc() does not work in modules ... 
:-( + */ + +int video1394_mmap(struct file *file, struct vm_area_struct *vma) +{ + struct video_card *video = NULL; + struct ti_ohci *ohci; + int res = -EINVAL; + unsigned long flags; + struct list_head *lh; + + spin_lock_irqsave(&video1394_cards_lock, flags); + if (!list_empty(&video1394_cards)) { + struct video_card *p; + list_for_each(lh, &video1394_cards) { + p = list_entry(lh, struct video_card, list); + if (p->id == MINOR(file->f_dentry->d_inode->i_rdev)) { + video = p; + break; + } + } + } + spin_unlock_irqrestore(&video1394_cards_lock, flags); + + if (video == NULL) { + PRINT_G(KERN_ERR, __FUNCTION__": Unknown video card for minor %d", + MINOR(file->f_dentry->d_inode->i_rdev)); + return -EFAULT; + } + + lock_kernel(); + ohci = video->ohci; + + if (video->current_ctx == NULL) { + PRINT(KERN_ERR, ohci->id, "Current iso context not set"); + } else + res = do_iso_mmap(ohci, video->current_ctx, + (char *)vma->vm_start, + (unsigned long)(vma->vm_end-vma->vm_start)); + unlock_kernel(); + return res; +} + +static int video1394_open(struct inode *inode, struct file *file) +{ + int i = MINOR(inode->i_rdev); + unsigned long flags; + struct video_card *video = NULL; + struct list_head *lh; + + spin_lock_irqsave(&video1394_cards_lock, flags); + if (!list_empty(&video1394_cards)) { + struct video_card *p; + list_for_each(lh, &video1394_cards) { + p = list_entry(lh, struct video_card, list); + if (p->id == i) { + video = p; + break; + } + } + } + spin_unlock_irqrestore(&video1394_cards_lock, flags); + + if (video == NULL) + return -EIO; + + V22_COMPAT_MOD_INC_USE_COUNT; + + return 0; +} + +static int video1394_release(struct inode *inode, struct file *file) +{ + struct video_card *video = NULL; + struct ti_ohci *ohci; + u64 mask; + int i; + unsigned long flags; + struct list_head *lh; + + spin_lock_irqsave(&video1394_cards_lock, flags); + if (!list_empty(&video1394_cards)) { + struct video_card *p; + list_for_each(lh, &video1394_cards) { + p = list_entry(lh, struct video_card, list); + if (p->id == MINOR(inode->i_rdev)) { + video = p; + break; + } + } + } + spin_unlock_irqrestore(&video1394_cards_lock, flags); + + if (video == NULL) { + PRINT_G(KERN_ERR, __FUNCTION__": Unknown device for minor %d", + MINOR(inode->i_rdev)); + return 1; + } + + ohci = video->ohci; + + lock_kernel(); + for (i=0;i<ohci->nb_iso_rcv_ctx-1;i++) + if (video->ir_context[i]) { + mask = (u64)0x1<<video->ir_context[i]->channel; + if (!(ohci->ISO_channel_usage & mask)) + PRINT(KERN_ERR, ohci->id, + "Channel %d is not being used", + video->ir_context[i]->channel); + else + ohci->ISO_channel_usage &= ~mask; + PRINT(KERN_INFO, ohci->id, + "Iso receive context %d stop listening " + "on channel %d", i+1, + video->ir_context[i]->channel); + free_dma_iso_ctx(&video->ir_context[i]); + } + + for (i=0;i<ohci->nb_iso_xmit_ctx;i++) + if (video->it_context[i]) { + mask = (u64)0x1<<video->it_context[i]->channel; + if (!(ohci->ISO_channel_usage & mask)) + PRINT(KERN_ERR, ohci->id, + "Channel %d is not being used", + video->it_context[i]->channel); + else + ohci->ISO_channel_usage &= ~mask; + PRINT(KERN_INFO, ohci->id, + "Iso transmit context %d stop talking " + "on channel %d", i+1, + video->it_context[i]->channel); + free_dma_iso_ctx(&video->it_context[i]); + } + + V22_COMPAT_MOD_DEC_USE_COUNT; + + unlock_kernel(); + return 0; +} + +static void irq_handler(int card, quadlet_t isoRecvIntEvent, + quadlet_t isoXmitIntEvent) +{ + int i; + unsigned long flags; + struct video_card *video = NULL; + struct list_head *lh; + + spin_lock_irqsave(&video1394_cards_lock, flags); + 
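+ /* look up the video_card registered for this card number; the list is only walked under video1394_cards_lock */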
if (!list_empty(&video1394_cards)) { + struct video_card *p; + list_for_each(lh, &video1394_cards) { + p = list_entry(lh, struct video_card, list); + if (p->id == card) { + video = p; + break; + } + } + } + spin_unlock_irqrestore(&video1394_cards_lock, flags); + + if (video == NULL) { + PRINT_G(KERN_ERR, __FUNCTION__": Unknown card number %d!!", + card); + return; + } + + DBGMSG(card, "Iso event Recv: %08x Xmit: %08x", + isoRecvIntEvent, isoXmitIntEvent); + + for (i=0;i<video->ohci->nb_iso_rcv_ctx-1;i++) + if (isoRecvIntEvent & (1<<(i+1))) + wakeup_dma_ir_ctx(video->ohci, + video->ir_context[i]); + + for (i=0;i<video->ohci->nb_iso_xmit_ctx;i++) + if (isoXmitIntEvent & (1<<i)) + wakeup_dma_it_ctx(video->ohci, + video->it_context[i]); +} + +static struct file_operations video1394_fops= +{ + OWNER_THIS_MODULE + ioctl: video1394_ioctl, + mmap: video1394_mmap, + open: video1394_open, + release: video1394_release +}; + +static int video1394_init(struct ti_ohci *ohci) +{ + struct video_card *video = kmalloc(sizeof(struct video_card), GFP_KERNEL); + unsigned long flags; + char name[16]; + + if (video == NULL) { + PRINT(KERN_ERR, ohci->id, "Cannot allocate video_card"); + return -1; + } + + memset(video, 0, sizeof(struct video_card)); + + spin_lock_irqsave(&video1394_cards_lock, flags); + INIT_LIST_HEAD(&video->list); + list_add_tail(&video->list, &video1394_cards); + spin_unlock_irqrestore(&video1394_cards_lock, flags); + + if (ohci1394_register_video(ohci, &video_tmpl)<0) { + PRINT(KERN_ERR, ohci->id, "Register_video failed"); + return -1; + } + + video->id = ohci->id; + video->ohci = ohci; + + /* Iso receive dma contexts */ + video->ir_context = (struct dma_iso_ctx **) + kmalloc((ohci->nb_iso_rcv_ctx-1)* + sizeof(struct dma_iso_ctx *), GFP_KERNEL); + if (video->ir_context) + memset(video->ir_context, 0, + (ohci->nb_iso_rcv_ctx-1)*sizeof(struct dma_iso_ctx *)); + else { + PRINT(KERN_ERR, ohci->id, "Cannot allocate ir_context"); + return -1; + } + + /* Iso transmit dma contexts */ + video->it_context = (struct dma_iso_ctx **) + kmalloc(ohci->nb_iso_xmit_ctx * + sizeof(struct dma_iso_ctx *), GFP_KERNEL); + if (video->it_context) + memset(video->it_context, 0, + ohci->nb_iso_xmit_ctx * sizeof(struct dma_iso_ctx *)); + else { + PRINT(KERN_ERR, ohci->id, "Cannot allocate it_context"); + return -1; + } + + sprintf(name, "%d", video->id); + video->devfs = devfs_register(devfs_handle, name, + DEVFS_FL_AUTO_OWNER, + VIDEO1394_MAJOR, 0, + S_IFCHR | S_IRUSR | S_IWUSR, + &video1394_fops, NULL); + + return 0; +} + +/* Takes video1394_cards_lock itself, so must not be called with it held */ +static void remove_card(struct video_card *video) +{ + int i; + unsigned long flags; + + ohci1394_unregister_video(video->ohci, &video_tmpl); + + devfs_unregister(video->devfs); + + /* Free the iso receive contexts */ + if (video->ir_context) { + for (i=0;i<video->ohci->nb_iso_rcv_ctx-1;i++) { + free_dma_iso_ctx(&video->ir_context[i]); + } + kfree(video->ir_context); + } + + /* Free the iso transmit contexts */ + if (video->it_context) { + for (i=0;i<video->ohci->nb_iso_xmit_ctx;i++) { + free_dma_iso_ctx(&video->it_context[i]); + } + kfree(video->it_context); + } + spin_lock_irqsave(&video1394_cards_lock, flags); + list_del(&video->list); + spin_unlock_irqrestore(&video1394_cards_lock, flags); + + kfree(video); +} + +static void video1394_remove_host (struct hpsb_host *host) +{ + struct ti_ohci *ohci; + unsigned long flags; + struct list_head *lh; + + /* We only work with the OHCI-1394 driver */ + if (strcmp(host->template->name, OHCI1394_DRIVER_NAME)) + return; + + ohci = (struct ti_ohci *)host->hostdata; + + 
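+ /* find the video_card bound to this ohci and tear it down */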
spin_lock_irqsave(&video1394_cards_lock, flags); + if (!list_empty(&video1394_cards)) { + struct video_card *p; + list_for_each(lh, &video1394_cards) { + p = list_entry(lh, struct video_card, list); + if (p->ohci == ohci) { + spin_unlock_irqrestore(&video1394_cards_lock, flags); + remove_card(p); + return; + } + } + } + spin_unlock_irqrestore(&video1394_cards_lock, flags); + + return; +} + +static void video1394_add_host (struct hpsb_host *host) +{ + struct ti_ohci *ohci; + + /* We only work with the OHCI-1394 driver */ + if (strcmp(host->template->name, OHCI1394_DRIVER_NAME)) + return; + + ohci = (struct ti_ohci *)host->hostdata; + + video1394_init(ohci); + + return; +} + +static struct hpsb_highlevel_ops hl_ops = { + add_host: video1394_add_host, + remove_host: video1394_remove_host, +}; + +MODULE_AUTHOR("Sebastien Rougeaux "); +MODULE_DESCRIPTION("driver for digital video on OHCI board"); +MODULE_SUPPORTED_DEVICE(VIDEO1394_DRIVER_NAME); + +static void __exit video1394_exit_module (void) +{ + hpsb_unregister_highlevel (hl_handle); + + devfs_unregister(devfs_handle); + devfs_unregister_chrdev(VIDEO1394_MAJOR, VIDEO1394_DRIVER_NAME); + + PRINT_G(KERN_INFO, "Removed " VIDEO1394_DRIVER_NAME " module\n"); +} + +static int __init video1394_init_module (void) +{ + if (devfs_register_chrdev(VIDEO1394_MAJOR, VIDEO1394_DRIVER_NAME, + &video1394_fops)) { + PRINT_G(KERN_ERR, "video1394: unable to get major %d\n", + VIDEO1394_MAJOR); + return -EIO; + } +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,3,0) + devfs_handle = devfs_mk_dir(NULL, VIDEO1394_DRIVER_NAME, + strlen(VIDEO1394_DRIVER_NAME), NULL); +#else + devfs_handle = devfs_mk_dir(NULL, VIDEO1394_DRIVER_NAME, NULL); +#endif + + hl_handle = hpsb_register_highlevel (VIDEO1394_DRIVER_NAME, &hl_ops); + if (hl_handle == NULL) { + PRINT_G(KERN_ERR, "No more memory for driver\n"); + devfs_unregister(devfs_handle); + devfs_unregister_chrdev(VIDEO1394_MAJOR, VIDEO1394_DRIVER_NAME); + return -ENOMEM; + } + + return 0; +} + +module_init(video1394_init_module); +module_exit(video1394_exit_module); diff -urpN linux-2.4.9-linus/drivers/md/lvm-snap.c linux-2.4.9-larpage/drivers/md/lvm-snap.c --- linux-2.4.9-linus/drivers/md/lvm-snap.c 2001-08-15 01:22:15.000000000 -0700 +++ linux-2.4.9-larpage/drivers/md/lvm-snap.c 2002-11-20 02:02:43.000000000 -0800 @@ -487,10 +487,9 @@ static int calc_max_buckets(void) { unsigned long mem; - mem = num_physpages << PAGE_SHIFT; - mem /= 100; - mem *= 2; - mem /= sizeof(struct list_head); + mem = num_physpages; + mem /= 50 * sizeof(struct list_head); + mem <<= PAGE_SHIFT; return mem; } diff -urpN linux-2.4.9-linus/drivers/md/md.c linux-2.4.9-larpage/drivers/md/md.c --- linux-2.4.9-linus/drivers/md/md.c 2001-08-12 12:39:02.000000000 -0700 +++ linux-2.4.9-larpage/drivers/md/md.c 2002-11-20 02:02:44.000000000 -0800 @@ -448,10 +448,9 @@ static int alloc_array_sb (mddev_t * mdd return 0; } - mddev->sb = (mdp_super_t *) __get_free_page (GFP_KERNEL); + mddev->sb = (mdp_super_t *) get_zeroed_page(GFP_KERNEL); if (!mddev->sb) return -ENOMEM; - md_clear_page(mddev->sb); return 0; } @@ -460,13 +459,11 @@ static int alloc_disk_sb (mdk_rdev_t * r if (rdev->sb) MD_BUG(); - rdev->sb = (mdp_super_t *) __get_free_page(GFP_KERNEL); + rdev->sb = (mdp_super_t *) get_zeroed_page(GFP_KERNEL); if (!rdev->sb) { printk (OUT_OF_MEM); return -EINVAL; } - md_clear_page(rdev->sb); - return 0; } @@ -1513,9 +1510,9 @@ static int device_size_calculation (mdde readahead = MD_READAHEAD; if ((sb->level == 0) || (sb->level == 4) || (sb->level == 5)) { - readahead = (mddev->sb->chunk_size>>PAGE_SHIFT) * 4 * data_disks; - 
if (readahead < data_disks * (MAX_SECTORS>>(PAGE_SHIFT-9))*2) - readahead = data_disks * (MAX_SECTORS>>(PAGE_SHIFT-9))*2; + readahead = (mddev->sb->chunk_size>>READAHEAD_SHIFT) * 4 * data_disks; + if (readahead < data_disks * (MAX_SECTORS>>(READAHEAD_SHIFT-9))*2) + readahead = data_disks * (MAX_SECTORS>>(READAHEAD_SHIFT-9))*2; } else { if (sb->level == -3) readahead = 0; @@ -1523,11 +1520,11 @@ static int device_size_calculation (mdde md_maxreadahead[mdidx(mddev)] = readahead; printk(KERN_INFO "md%d: max total readahead window set to %ldk\n", - mdidx(mddev), readahead*(PAGE_SIZE/1024)); + mdidx(mddev), readahead*(READAHEAD_UNIT/1024)); printk(KERN_INFO "md%d: %d data-disks, max readahead per data-disk: %ldk\n", - mdidx(mddev), data_disks, readahead/data_disks*(PAGE_SIZE/1024)); + mdidx(mddev), data_disks, readahead/data_disks*(READAHEAD_UNIT/1024)); return 0; abort: return 1; @@ -3265,7 +3262,7 @@ recheck: /* * Tune reconstruction: */ - window = MAX_READAHEAD*(PAGE_SIZE/512); + window = MAX_READAHEAD*(READAHEAD_UNIT/512); printk(KERN_INFO "md: using %dk window, over a total of %d blocks.\n",window/2,max_sectors/2); atomic_set(&mddev->recovery_active, 0); diff -urpN linux-2.4.9-linus/drivers/md/md.c.orig linux-2.4.9-larpage/drivers/md/md.c.orig --- linux-2.4.9-linus/drivers/md/md.c.orig 1969-12-31 16:00:00.000000000 -0800 +++ linux-2.4.9-larpage/drivers/md/md.c.orig 2002-11-20 02:02:44.000000000 -0800 @@ -0,0 +1,3903 @@ +/* + md.c : Multiple Devices driver for Linux + Copyright (C) 1998, 1999, 2000 Ingo Molnar + + completely rewritten, based on the MD driver code from Marc Zyngier + + Changes: + + - RAID-1/RAID-5 extensions by Miguel de Icaza, Gadi Oxman, Ingo Molnar + - boot support for linear and striped mode by Harald Hoyer + - kerneld support by Boris Tobotras + - kmod support by: Cyrus Durgin + - RAID0 bugfixes: Mark Anthony Lisher + - Devfs support by Richard Gooch + + - lots of fixes and improvements to the RAID1/RAID5 and generic + RAID code (such as request based resynchronization): + + Neil Brown . + + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2, or (at your option) + any later version. + + You should have received a copy of the GNU General Public License + (for example /usr/src/linux/COPYING); if not, write to the Free + Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. +*/ + +#include <linux/module.h> +#include <linux/config.h> +#include <linux/raid/md.h> +#include <linux/sysctl.h> +#include <linux/raid/xor.h> +#include <linux/devfs_fs_kernel.h> + +#include <linux/init.h> + +#ifdef CONFIG_KMOD +#include <linux/kmod.h> +#endif + +#define __KERNEL_SYSCALLS__ +#include <linux/unistd.h> + +#include <asm/unaligned.h> + +extern asmlinkage int sys_sched_yield(void); +extern asmlinkage long sys_setsid(void); + +#define MAJOR_NR MD_MAJOR +#define MD_DRIVER + +#include <linux/blk.h> + +#define DEBUG 0 +#if DEBUG +# define dprintk(x...) printk(x) +#else +# define dprintk(x...) do { } while(0) +#endif + +#ifndef MODULE +static void autostart_arrays (void); +#endif + +static mdk_personality_t *pers[MAX_PERSONALITY]; + +/* + * Current RAID-1,4,5 parallel reconstruction 'guaranteed speed limit' + * is 100 KB/sec, so the extra system load does not show up that much. + * Increase it if you want to have more _guaranteed_ speed. Note that + * the RAID driver will use the maximum available bandwidth if the IO + * subsystem is idle. There is also an 'absolute maximum' reconstruction + * speed limit - in case reconstruction slows down your system despite + * idle IO detection. 
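+ * (both limits are expressed in KB/sec)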
+ * + * you can change it via /proc/sys/dev/raid/speed_limit_min and _max. + */ + +static int sysctl_speed_limit_min = 100; +static int sysctl_speed_limit_max = 100000; + +static struct ctl_table_header *raid_table_header; + +static ctl_table raid_table[] = { + {DEV_RAID_SPEED_LIMIT_MIN, "speed_limit_min", + &sysctl_speed_limit_min, sizeof(int), 0644, NULL, &proc_dointvec}, + {DEV_RAID_SPEED_LIMIT_MAX, "speed_limit_max", + &sysctl_speed_limit_max, sizeof(int), 0644, NULL, &proc_dointvec}, + {0} +}; + +static ctl_table raid_dir_table[] = { + {DEV_RAID, "raid", NULL, 0, 0555, raid_table}, + {0} +}; + +static ctl_table raid_root_table[] = { + {CTL_DEV, "dev", NULL, 0, 0555, raid_dir_table}, + {0} +}; + +/* + * these have to be allocated separately because external + * subsystems want to have a pre-defined structure + */ +struct hd_struct md_hd_struct[MAX_MD_DEVS]; +static int md_blocksizes[MAX_MD_DEVS]; +static int md_hardsect_sizes[MAX_MD_DEVS]; +static int md_maxreadahead[MAX_MD_DEVS]; +static mdk_thread_t *md_recovery_thread; + +int md_size[MAX_MD_DEVS]; + +extern struct block_device_operations md_fops; +static devfs_handle_t devfs_handle; + +static struct gendisk md_gendisk= +{ + major: MD_MAJOR, + major_name: "md", + minor_shift: 0, + max_p: 1, + part: md_hd_struct, + sizes: md_size, + nr_real: MAX_MD_DEVS, + real_devices: NULL, + next: NULL, + fops: &md_fops, +}; + +/* + * Enables to iterate over all existing md arrays + */ +static MD_LIST_HEAD(all_mddevs); + +/* + * The mapping between kdev and mddev is not necessary a simple + * one! Eg. HSM uses several sub-devices to implement Logical + * Volumes. All these sub-devices map to the same mddev. + */ +dev_mapping_t mddev_map[MAX_MD_DEVS]; + +void add_mddev_mapping (mddev_t * mddev, kdev_t dev, void *data) +{ + unsigned int minor = MINOR(dev); + + if (MAJOR(dev) != MD_MAJOR) { + MD_BUG(); + return; + } + if (mddev_map[minor].mddev != NULL) { + MD_BUG(); + return; + } + mddev_map[minor].mddev = mddev; + mddev_map[minor].data = data; +} + +void del_mddev_mapping (mddev_t * mddev, kdev_t dev) +{ + unsigned int minor = MINOR(dev); + + if (MAJOR(dev) != MD_MAJOR) { + MD_BUG(); + return; + } + if (mddev_map[minor].mddev != mddev) { + MD_BUG(); + return; + } + mddev_map[minor].mddev = NULL; + mddev_map[minor].data = NULL; +} + +static int md_make_request (request_queue_t *q, int rw, struct buffer_head * bh) +{ + mddev_t *mddev = kdev_to_mddev(bh->b_rdev); + + if (mddev && mddev->pers) + return mddev->pers->make_request(mddev, rw, bh); + else { + buffer_IO_error(bh); + return 0; + } +} + +static mddev_t * alloc_mddev (kdev_t dev) +{ + mddev_t *mddev; + + if (MAJOR(dev) != MD_MAJOR) { + MD_BUG(); + return 0; + } + mddev = (mddev_t *) kmalloc(sizeof(*mddev), GFP_KERNEL); + if (!mddev) + return NULL; + + memset(mddev, 0, sizeof(*mddev)); + + mddev->__minor = MINOR(dev); + init_MUTEX(&mddev->reconfig_sem); + init_MUTEX(&mddev->recovery_sem); + init_MUTEX(&mddev->resync_sem); + MD_INIT_LIST_HEAD(&mddev->disks); + MD_INIT_LIST_HEAD(&mddev->all_mddevs); + atomic_set(&mddev->active, 0); + + /* + * The 'base' mddev is the one with data NULL. + * personalities can create additional mddevs + * if necessary. 
+ */ + add_mddev_mapping(mddev, dev, 0); + md_list_add(&mddev->all_mddevs, &all_mddevs); + + MOD_INC_USE_COUNT; + + return mddev; +} + +struct gendisk * find_gendisk (kdev_t dev) +{ + struct gendisk *tmp = gendisk_head; + + while (tmp != NULL) { + if (tmp->major == MAJOR(dev)) + return (tmp); + tmp = tmp->next; + } + return (NULL); +} + +mdk_rdev_t * find_rdev_nr(mddev_t *mddev, int nr) +{ + mdk_rdev_t * rdev; + struct md_list_head *tmp; + + ITERATE_RDEV(mddev,rdev,tmp) { + if (rdev->desc_nr == nr) + return rdev; + } + return NULL; +} + +mdk_rdev_t * find_rdev(mddev_t * mddev, kdev_t dev) +{ + struct md_list_head *tmp; + mdk_rdev_t *rdev; + + ITERATE_RDEV(mddev,rdev,tmp) { + if (rdev->dev == dev) + return rdev; + } + return NULL; +} + +static MD_LIST_HEAD(device_names); + +char * partition_name (kdev_t dev) +{ + struct gendisk *hd; + static char nomem [] = "<nomem>"; + dev_name_t *dname; + struct md_list_head *tmp = device_names.next; + + while (tmp != &device_names) { + dname = md_list_entry(tmp, dev_name_t, list); + if (dname->dev == dev) + return dname->name; + tmp = tmp->next; + } + + dname = (dev_name_t *) kmalloc(sizeof(*dname), GFP_KERNEL); + + if (!dname) + return nomem; + /* + * ok, add this new device name to the list + */ + hd = find_gendisk (dev); + dname->name = NULL; + if (hd) + dname->name = disk_name (hd, MINOR(dev), dname->namebuf); + if (!dname->name) { + sprintf (dname->namebuf, "[dev %s]", kdevname(dev)); + dname->name = dname->namebuf; + } + + dname->dev = dev; + MD_INIT_LIST_HEAD(&dname->list); + md_list_add(&dname->list, &device_names); + + return dname->name; +} + +static unsigned int calc_dev_sboffset (kdev_t dev, mddev_t *mddev, + int persistent) +{ + unsigned int size = 0; + + if (blk_size[MAJOR(dev)]) + size = blk_size[MAJOR(dev)][MINOR(dev)]; + if (persistent) + size = MD_NEW_SIZE_BLOCKS(size); + return size; +} + +static unsigned int calc_dev_size (kdev_t dev, mddev_t *mddev, int persistent) +{ + unsigned int size; + + size = calc_dev_sboffset(dev, mddev, persistent); + if (!mddev->sb) { + MD_BUG(); + return size; + } + if (mddev->sb->chunk_size) + size &= ~(mddev->sb->chunk_size/1024 - 1); + return size; +} + +static unsigned int zoned_raid_size (mddev_t *mddev) +{ + unsigned int mask; + mdk_rdev_t * rdev; + struct md_list_head *tmp; + + if (!mddev->sb) { + MD_BUG(); + return -EINVAL; + } + /* + * do size and offset calculations. + */ + mask = ~(mddev->sb->chunk_size/1024 - 1); + + ITERATE_RDEV(mddev,rdev,tmp) { + rdev->size &= mask; + md_size[mdidx(mddev)] += rdev->size; + } + return 0; +} + +/* + * We check whether all devices are numbered from 0 to nb_dev-1. The + * order is guaranteed even after device name changes. + * + * Some personalities (raid0, linear) use this. Personalities that + * provide data have to be able to deal with loss of individual + * disks, so they do their checking themselves. 
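+ * (a three-disk array, for example, must carry desc_nr 0, 1 and 2, each occurring exactly once)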
+ */ +int md_check_ordering (mddev_t *mddev) +{ + int i, c; + mdk_rdev_t *rdev; + struct md_list_head *tmp; + + /* + * First, all devices must be fully functional + */ + ITERATE_RDEV(mddev,rdev,tmp) { + if (rdev->faulty) { + printk("md: md%d's device %s faulty, aborting.\n", + mdidx(mddev), partition_name(rdev->dev)); + goto abort; + } + } + + c = 0; + ITERATE_RDEV(mddev,rdev,tmp) { + c++; + } + if (c != mddev->nb_dev) { + MD_BUG(); + goto abort; + } + if (mddev->nb_dev != mddev->sb->raid_disks) { + printk("md: md%d, array needs %d disks, has %d, aborting.\n", + mdidx(mddev), mddev->sb->raid_disks, mddev->nb_dev); + goto abort; + } + /* + * Now the numbering check + */ + for (i = 0; i < mddev->nb_dev; i++) { + c = 0; + ITERATE_RDEV(mddev,rdev,tmp) { + if (rdev->desc_nr == i) + c++; + } + if (!c) { + printk("md: md%d, missing disk #%d, aborting.\n", + mdidx(mddev), i); + goto abort; + } + if (c > 1) { + printk("md: md%d, too many disks #%d, aborting.\n", + mdidx(mddev), i); + goto abort; + } + } + return 0; +abort: + return 1; +} + +static void remove_descriptor (mdp_disk_t *disk, mdp_super_t *sb) +{ + if (disk_active(disk)) { + sb->working_disks--; + } else { + if (disk_spare(disk)) { + sb->spare_disks--; + sb->working_disks--; + } else { + sb->failed_disks--; + } + } + sb->nr_disks--; + disk->major = 0; + disk->minor = 0; + mark_disk_removed(disk); +} + +#define BAD_MAGIC KERN_ERR \ +"md: invalid raid superblock magic on %s\n" + +#define BAD_MINOR KERN_ERR \ +"md: %s: invalid raid minor (%x)\n" + +#define OUT_OF_MEM KERN_ALERT \ +"md: out of memory.\n" + +#define NO_SB KERN_ERR \ +"md: disabled device %s, could not read superblock.\n" + +#define BAD_CSUM KERN_WARNING \ +"md: invalid superblock checksum on %s\n" + +static int alloc_array_sb (mddev_t * mddev) +{ + if (mddev->sb) { + MD_BUG(); + return 0; + } + + mddev->sb = (mdp_super_t *) get_zeroed_page(GFP_KERNEL); + if (!mddev->sb) + return -ENOMEM; + return 0; +} + +static int alloc_disk_sb (mdk_rdev_t * rdev) +{ + if (rdev->sb) + MD_BUG(); + + rdev->sb = (mdp_super_t *) get_zeroed_page(GFP_KERNEL); + if (!rdev->sb) { + printk (OUT_OF_MEM); + return -EINVAL; + } + return 0; +} + +static void free_disk_sb (mdk_rdev_t * rdev) +{ + if (rdev->sb) { + free_page((unsigned long) rdev->sb); + rdev->sb = NULL; + rdev->sb_offset = 0; + rdev->size = 0; + } else { + if (!rdev->faulty) + MD_BUG(); + } +} + +static int read_disk_sb (mdk_rdev_t * rdev) +{ + int ret = -EINVAL; + struct buffer_head *bh = NULL; + kdev_t dev = rdev->dev; + mdp_super_t *sb; + unsigned long sb_offset; + + if (!rdev->sb) { + MD_BUG(); + goto abort; + } + + /* + * Calculate the position of the superblock, + * it's at the end of the disk + */ + sb_offset = calc_dev_sboffset(rdev->dev, rdev->mddev, 1); + rdev->sb_offset = sb_offset; + printk("(read) %s's sb offset: %ld", partition_name(dev), sb_offset); + fsync_dev(dev); + set_blocksize (dev, MD_SB_BYTES); + bh = bread (dev, sb_offset / MD_SB_BLOCKS, MD_SB_BYTES); + + if (bh) { + sb = (mdp_super_t *) bh->b_data; + memcpy (rdev->sb, sb, MD_SB_BYTES); + } else { + printk (NO_SB,partition_name(rdev->dev)); + goto abort; + } + printk(" [events: %08lx]\n", (unsigned long)rdev->sb->events_lo); + ret = 0; +abort: + if (bh) + brelse (bh); + return ret; +} + +static unsigned int calc_sb_csum (mdp_super_t * sb) +{ + unsigned int disk_csum, csum; + + disk_csum = sb->sb_csum; + sb->sb_csum = 0; + csum = csum_partial((void *)sb, MD_SB_BYTES, 0); + sb->sb_csum = disk_csum; + return csum; +} + +/* + * Check one RAID superblock for generic 
plausibility + */ + +static int check_disk_sb (mdk_rdev_t * rdev) +{ + mdp_super_t *sb; + int ret = -EINVAL; + + sb = rdev->sb; + if (!sb) { + MD_BUG(); + goto abort; + } + + if (sb->md_magic != MD_SB_MAGIC) { + printk (BAD_MAGIC, partition_name(rdev->dev)); + goto abort; + } + + if (sb->md_minor >= MAX_MD_DEVS) { + printk (BAD_MINOR, partition_name(rdev->dev), + sb->md_minor); + goto abort; + } + + if (calc_sb_csum(sb) != sb->sb_csum) + printk(BAD_CSUM, partition_name(rdev->dev)); + ret = 0; +abort: + return ret; +} + +static kdev_t dev_unit(kdev_t dev) +{ + unsigned int mask; + struct gendisk *hd = find_gendisk(dev); + + if (!hd) + return 0; + mask = ~((1 << hd->minor_shift) - 1); + + return MKDEV(MAJOR(dev), MINOR(dev) & mask); +} + +static mdk_rdev_t * match_dev_unit(mddev_t *mddev, kdev_t dev) +{ + struct md_list_head *tmp; + mdk_rdev_t *rdev; + + ITERATE_RDEV(mddev,rdev,tmp) + if (dev_unit(rdev->dev) == dev_unit(dev)) + return rdev; + + return NULL; +} + +static int match_mddev_units(mddev_t *mddev1, mddev_t *mddev2) +{ + struct md_list_head *tmp; + mdk_rdev_t *rdev; + + ITERATE_RDEV(mddev1,rdev,tmp) + if (match_dev_unit(mddev2, rdev->dev)) + return 1; + + return 0; +} + +static MD_LIST_HEAD(all_raid_disks); +static MD_LIST_HEAD(pending_raid_disks); + +static void bind_rdev_to_array (mdk_rdev_t * rdev, mddev_t * mddev) +{ + mdk_rdev_t *same_pdev; + + if (rdev->mddev) { + MD_BUG(); + return; + } + same_pdev = match_dev_unit(mddev, rdev->dev); + if (same_pdev) + printk( KERN_WARNING +"md%d: WARNING: %s appears to be on the same physical disk as %s. True\n" +" protection against single-disk failure might be compromised.\n", + mdidx(mddev), partition_name(rdev->dev), + partition_name(same_pdev->dev)); + + md_list_add(&rdev->same_set, &mddev->disks); + rdev->mddev = mddev; + mddev->nb_dev++; + printk("md: bind<%s,%d>\n", partition_name(rdev->dev), mddev->nb_dev); +} + +static void unbind_rdev_from_array (mdk_rdev_t * rdev) +{ + if (!rdev->mddev) { + MD_BUG(); + return; + } + md_list_del(&rdev->same_set); + MD_INIT_LIST_HEAD(&rdev->same_set); + rdev->mddev->nb_dev--; + printk("md: unbind<%s,%d>\n", partition_name(rdev->dev), + rdev->mddev->nb_dev); + rdev->mddev = NULL; +} + +/* + * prevent the device from being mounted, repartitioned or + * otherwise reused by a RAID array (or any other kernel + * subsystem), by opening the device. 
[simply getting an + inode is not enough, the SCSI module usage code needs + an explicit open() on the device] + */ +static int lock_rdev (mdk_rdev_t *rdev) +{ + int err = 0; + struct block_device *bdev; + + bdev = bdget(rdev->dev); + if (bdev == NULL) + return -ENOMEM; + err = blkdev_get(bdev, FMODE_READ|FMODE_WRITE, 0, BDEV_FILE); + if (!err) { + rdev->bdev = bdev; + } + return err; +} + +static void unlock_rdev (mdk_rdev_t *rdev) +{ + if (!rdev->bdev) + MD_BUG(); + blkdev_put(rdev->bdev, BDEV_FILE); + bdput(rdev->bdev); + rdev->bdev = NULL; +} + +void md_autodetect_dev (kdev_t dev); + +static void export_rdev (mdk_rdev_t * rdev) +{ + printk("md: export_rdev(%s)\n",partition_name(rdev->dev)); + if (rdev->mddev) + MD_BUG(); + unlock_rdev(rdev); + free_disk_sb(rdev); + md_list_del(&rdev->all); + MD_INIT_LIST_HEAD(&rdev->all); + if (rdev->pending.next != &rdev->pending) { + printk("md: (%s was pending)\n",partition_name(rdev->dev)); + md_list_del(&rdev->pending); + MD_INIT_LIST_HEAD(&rdev->pending); + } +#ifndef MODULE + md_autodetect_dev(rdev->dev); +#endif + rdev->dev = 0; + rdev->faulty = 0; + kfree(rdev); +} + +static void kick_rdev_from_array (mdk_rdev_t * rdev) +{ + unbind_rdev_from_array(rdev); + export_rdev(rdev); +} + +static void export_array (mddev_t *mddev) +{ + struct md_list_head *tmp; + mdk_rdev_t *rdev; + mdp_super_t *sb = mddev->sb; + + if (mddev->sb) { + mddev->sb = NULL; + free_page((unsigned long) sb); + } + + ITERATE_RDEV(mddev,rdev,tmp) { + if (!rdev->mddev) { + MD_BUG(); + continue; + } + kick_rdev_from_array(rdev); + } + if (mddev->nb_dev) + MD_BUG(); +} + +static void free_mddev (mddev_t *mddev) +{ + if (!mddev) { + MD_BUG(); + return; + } + + export_array(mddev); + md_size[mdidx(mddev)] = 0; + md_hd_struct[mdidx(mddev)].nr_sects = 0; + + /* + * Make sure nobody else is using this mddev + * (careful, we rely on the global kernel lock here) + */ + while (md_atomic_read(&mddev->resync_sem.count) != 1) + schedule(); + while (md_atomic_read(&mddev->recovery_sem.count) != 1) + schedule(); + + del_mddev_mapping(mddev, MKDEV(MD_MAJOR, mdidx(mddev))); + md_list_del(&mddev->all_mddevs); + MD_INIT_LIST_HEAD(&mddev->all_mddevs); + kfree(mddev); + MOD_DEC_USE_COUNT; +} + +#undef BAD_CSUM +#undef BAD_MAGIC +#undef OUT_OF_MEM +#undef NO_SB + +static void print_desc(mdp_disk_t *desc) +{ + printk(" DISK<N:%d,%s(%d,%d),R:%d,S:%d>\n", desc->number, + partition_name(MKDEV(desc->major,desc->minor)), + desc->major,desc->minor,desc->raid_disk,desc->state); +} + +static void print_sb(mdp_super_t *sb) +{ + int i; + + printk("md: SB: (V:%d.%d.%d) ID:<%08x.%08x.%08x.%08x> CT:%08x\n", + sb->major_version, sb->minor_version, sb->patch_version, + sb->set_uuid0, sb->set_uuid1, sb->set_uuid2, sb->set_uuid3, + sb->ctime); + printk("md: L%d S%08d ND:%d RD:%d md%d LO:%d CS:%d\n", sb->level, + sb->size, sb->nr_disks, sb->raid_disks, sb->md_minor, + sb->layout, sb->chunk_size); + printk("md: UT:%08x ST:%d AD:%d WD:%d FD:%d SD:%d CSUM:%08x E:%08lx\n", + sb->utime, sb->state, sb->active_disks, sb->working_disks, + sb->failed_disks, sb->spare_disks, + sb->sb_csum, (unsigned long)sb->events_lo); + + for (i = 0; i < MD_SB_DISKS; i++) { + mdp_disk_t *desc; + + desc = sb->disks + i; + printk("md: D %2d: ", i); + print_desc(desc); + } + printk("md: THIS: "); + print_desc(&sb->this_disk); + +} + +static void print_rdev(mdk_rdev_t *rdev) +{ + printk("md: rdev %s: O:%s, SZ:%08ld F:%d DN:%d ", + partition_name(rdev->dev), partition_name(rdev->old_dev), + rdev->size, rdev->faulty, rdev->desc_nr); + if (rdev->sb) { + printk("md: rdev 
superblock:\n"); + print_sb(rdev->sb); + } else + printk("md: no rdev superblock!\n"); +} + +void md_print_devices (void) +{ + struct md_list_head *tmp, *tmp2; + mdk_rdev_t *rdev; + mddev_t *mddev; + + printk("\n"); + printk("md: **********************************\n"); + printk("md: * <COMPLETE RAID STATE PRINTOUT> *\n"); + printk("md: **********************************\n"); + ITERATE_MDDEV(mddev,tmp) { + printk("md%d: ", mdidx(mddev)); + + ITERATE_RDEV(mddev,rdev,tmp2) + printk("<%s>", partition_name(rdev->dev)); + + if (mddev->sb) { + printk(" array superblock:\n"); + print_sb(mddev->sb); + } else + printk(" no array superblock.\n"); + + ITERATE_RDEV(mddev,rdev,tmp2) + print_rdev(rdev); + } + printk("md: **********************************\n"); + printk("\n"); +} + +static int sb_equal ( mdp_super_t *sb1, mdp_super_t *sb2) +{ + int ret; + mdp_super_t *tmp1, *tmp2; + + tmp1 = kmalloc(sizeof(*tmp1),GFP_KERNEL); + tmp2 = kmalloc(sizeof(*tmp2),GFP_KERNEL); + + if (!tmp1 || !tmp2) { + ret = 0; + goto abort; + } + + *tmp1 = *sb1; + *tmp2 = *sb2; + + /* + * nr_disks is not constant + */ + tmp1->nr_disks = 0; + tmp2->nr_disks = 0; + + if (memcmp(tmp1, tmp2, MD_SB_GENERIC_CONSTANT_WORDS * 4)) + ret = 0; + else + ret = 1; + +abort: + if (tmp1) + kfree(tmp1); + if (tmp2) + kfree(tmp2); + + return ret; +} + +static int uuid_equal(mdk_rdev_t *rdev1, mdk_rdev_t *rdev2) +{ + if ( (rdev1->sb->set_uuid0 == rdev2->sb->set_uuid0) && + (rdev1->sb->set_uuid1 == rdev2->sb->set_uuid1) && + (rdev1->sb->set_uuid2 == rdev2->sb->set_uuid2) && + (rdev1->sb->set_uuid3 == rdev2->sb->set_uuid3)) + + return 1; + + return 0; +} + +static mdk_rdev_t * find_rdev_all (kdev_t dev) +{ + struct md_list_head *tmp; + mdk_rdev_t *rdev; + + tmp = all_raid_disks.next; + while (tmp != &all_raid_disks) { + rdev = md_list_entry(tmp, mdk_rdev_t, all); + if (rdev->dev == dev) + return rdev; + tmp = tmp->next; + } + return NULL; +} + +#define GETBLK_FAILED KERN_ERR \ +"md: getblk failed for device %s\n" + +static int write_disk_sb(mdk_rdev_t * rdev) +{ + struct buffer_head *bh; + kdev_t dev; + unsigned long sb_offset, size; + mdp_super_t *sb; + + if (!rdev->sb) { + MD_BUG(); + return 1; + } + if (rdev->faulty) { + MD_BUG(); + return 1; + } + if (rdev->sb->md_magic != MD_SB_MAGIC) { + MD_BUG(); + return 1; + } + + dev = rdev->dev; + sb_offset = calc_dev_sboffset(dev, rdev->mddev, 1); + if (rdev->sb_offset != sb_offset) { + printk("%s's sb offset has changed from %ld to %ld, skipping\n", partition_name(dev), rdev->sb_offset, sb_offset); + goto skip; + } + /* + * If the disk went offline meanwhile and it's just a spare, then + * it's size has changed to zero silently, and the MD code does + * not yet know that it's faulty. 
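+ * (the size recheck below catches this case and skips the write)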
+ */ + size = calc_dev_size(dev, rdev->mddev, 1); + if (size != rdev->size) { + printk("%s's size has changed from %ld to %ld since import, skipping\n", partition_name(dev), rdev->size, size); + goto skip; + } + + printk("(write) %s's sb offset: %ld\n", partition_name(dev), sb_offset); + fsync_dev(dev); + set_blocksize(dev, MD_SB_BYTES); + bh = getblk(dev, sb_offset / MD_SB_BLOCKS, MD_SB_BYTES); + if (!bh) { + printk(GETBLK_FAILED, partition_name(dev)); + return 1; + } + memset(bh->b_data,0,bh->b_size); + sb = (mdp_super_t *) bh->b_data; + memcpy(sb, rdev->sb, MD_SB_BYTES); + + mark_buffer_uptodate(bh, 1); + mark_buffer_dirty(bh); + ll_rw_block(WRITE, 1, &bh); + wait_on_buffer(bh); + brelse(bh); + fsync_dev(dev); +skip: + return 0; +} +#undef GETBLK_FAILED + +static void set_this_disk(mddev_t *mddev, mdk_rdev_t *rdev) +{ + int i, ok = 0; + mdp_disk_t *desc; + + for (i = 0; i < MD_SB_DISKS; i++) { + desc = mddev->sb->disks + i; +#if 0 + if (disk_faulty(desc)) { + if (MKDEV(desc->major,desc->minor) == rdev->dev) + ok = 1; + continue; + } +#endif + if (MKDEV(desc->major,desc->minor) == rdev->dev) { + rdev->sb->this_disk = *desc; + rdev->desc_nr = desc->number; + ok = 1; + break; + } + } + + if (!ok) { + MD_BUG(); + } +} + +static int sync_sbs(mddev_t * mddev) +{ + mdk_rdev_t *rdev; + mdp_super_t *sb; + struct md_list_head *tmp; + + ITERATE_RDEV(mddev,rdev,tmp) { + if (rdev->faulty) + continue; + sb = rdev->sb; + *sb = *mddev->sb; + set_this_disk(mddev, rdev); + sb->sb_csum = calc_sb_csum(sb); + } + return 0; +} + +int md_update_sb(mddev_t * mddev) +{ + int err, count = 100; + struct md_list_head *tmp; + mdk_rdev_t *rdev; + +repeat: + mddev->sb->utime = CURRENT_TIME; + if ((++mddev->sb->events_lo)==0) + ++mddev->sb->events_hi; + + if ((mddev->sb->events_lo|mddev->sb->events_hi)==0) { + /* + * oops, this 64-bit counter should never wrap. + * Either we are in around ~1 trillion A.C., assuming + * 1 reboot per second, or we have a bug: + */ + MD_BUG(); + mddev->sb->events_lo = mddev->sb->events_hi = 0xffffffff; + } + sync_sbs(mddev); + + /* + * do not write anything to disk if using + * nonpersistent superblocks + */ + if (mddev->sb->not_persistent) + return 0; + + printk(KERN_INFO "md: updating md%d RAID superblock on device\n", + mdidx(mddev)); + + err = 0; + ITERATE_RDEV(mddev,rdev,tmp) { + printk("md: "); + if (rdev->faulty) + printk("(skipping faulty "); + printk("%s ", partition_name(rdev->dev)); + if (!rdev->faulty) { + printk("[events: %08lx]", + (unsigned long)rdev->sb->events_lo); + err += write_disk_sb(rdev); + } else + printk(")\n"); + } + if (err) { + if (--count) { + printk("md: errors occurred during superblock update, repeating\n"); + goto repeat; + } + printk("md: excessive errors occurred during superblock update, exiting\n"); + } + return 0; +} + +/* + * Import a device. If 'on_disk', then sanity check the superblock + * + * mark the device faulty if: + * + * - the device is nonexistent (zero size) + * - the device has no valid superblock + * + * a faulty rdev _never_ has rdev->sb set. 
+ */ +static int md_import_device (kdev_t newdev, int on_disk) +{ + int err; + mdk_rdev_t *rdev; + unsigned int size; + + if (find_rdev_all(newdev)) + return -EEXIST; + + rdev = (mdk_rdev_t *) kmalloc(sizeof(*rdev), GFP_KERNEL); + if (!rdev) { + printk("md: could not alloc mem for %s!\n", partition_name(newdev)); + return -ENOMEM; + } + memset(rdev, 0, sizeof(*rdev)); + + if (is_mounted(newdev)) { + printk("md: can not import %s, has active inodes!\n", + partition_name(newdev)); + err = -EBUSY; + goto abort_free; + } + + if ((err = alloc_disk_sb(rdev))) + goto abort_free; + + rdev->dev = newdev; + if (lock_rdev(rdev)) { + printk("md: could not lock %s, zero-size? Marking faulty.\n", + partition_name(newdev)); + err = -EINVAL; + goto abort_free; + } + rdev->desc_nr = -1; + rdev->faulty = 0; + + size = 0; + if (blk_size[MAJOR(newdev)]) + size = blk_size[MAJOR(newdev)][MINOR(newdev)]; + if (!size) { + printk("md: %s has zero size, marking faulty!\n", + partition_name(newdev)); + err = -EINVAL; + goto abort_free; + } + + if (on_disk) { + if ((err = read_disk_sb(rdev))) { + printk("md: could not read %s's sb, not importing!\n", + partition_name(newdev)); + goto abort_free; + } + if ((err = check_disk_sb(rdev))) { + printk("md: %s has invalid sb, not importing!\n", + partition_name(newdev)); + goto abort_free; + } + + rdev->old_dev = MKDEV(rdev->sb->this_disk.major, + rdev->sb->this_disk.minor); + rdev->desc_nr = rdev->sb->this_disk.number; + } + md_list_add(&rdev->all, &all_raid_disks); + MD_INIT_LIST_HEAD(&rdev->pending); + + if (rdev->faulty && rdev->sb) + free_disk_sb(rdev); + return 0; + +abort_free: + if (rdev->sb) { + if (rdev->bdev) + unlock_rdev(rdev); + free_disk_sb(rdev); + } + kfree(rdev); + return err; +} + +/* + * Check a full RAID array for plausibility + */ + +#define INCONSISTENT KERN_ERR \ +"md: fatal superblock inconsistency in %s -- removing from array\n" + +#define OUT_OF_DATE KERN_ERR \ +"md: superblock update time inconsistency -- using the most recent one\n" + +#define OLD_VERSION KERN_ALERT \ +"md: md%d: unsupported raid array version %d.%d.%d\n" + +#define NOT_CLEAN_IGNORE KERN_ERR \ +"md: md%d: raid array is not clean -- starting background reconstruction\n" + +#define UNKNOWN_LEVEL KERN_ERR \ +"md: md%d: unsupported raid level %d\n" + +static int analyze_sbs (mddev_t * mddev) +{ + int out_of_date = 0, i; + struct md_list_head *tmp, *tmp2; + mdk_rdev_t *rdev, *rdev2, *freshest; + mdp_super_t *sb; + + /* + * Verify the RAID superblock on each real device + */ + ITERATE_RDEV(mddev,rdev,tmp) { + if (rdev->faulty) { + MD_BUG(); + goto abort; + } + if (!rdev->sb) { + MD_BUG(); + goto abort; + } + if (check_disk_sb(rdev)) + goto abort; + } + + /* + * The superblock constant part has to be the same + * for all disks in the array. + */ + sb = NULL; + + ITERATE_RDEV(mddev,rdev,tmp) { + if (!sb) { + sb = rdev->sb; + continue; + } + if (!sb_equal(sb, rdev->sb)) { + printk (INCONSISTENT, partition_name(rdev->dev)); + kick_rdev_from_array(rdev); + continue; + } + } + + /* + * OK, we have all disks and the array is ready to run. Let's + * find the freshest superblock, that one will be the superblock + * that represents the whole array. + */ + if (!mddev->sb) + if (alloc_array_sb(mddev)) + goto abort; + sb = mddev->sb; + freshest = NULL; + + ITERATE_RDEV(mddev,rdev,tmp) { + __u64 ev1, ev2; + /* + * if the checksum is invalid, use the superblock + * only as a last resort. 
(decrease it's age by + * one event) + */ + if (calc_sb_csum(rdev->sb) != rdev->sb->sb_csum) { + if (rdev->sb->events_lo || rdev->sb->events_hi) + if ((rdev->sb->events_lo--)==0) + rdev->sb->events_hi--; + } + + printk("md: %s's event counter: %08lx\n", partition_name(rdev->dev), + (unsigned long)rdev->sb->events_lo); + if (!freshest) { + freshest = rdev; + continue; + } + /* + * Find the newest superblock version + */ + ev1 = md_event(rdev->sb); + ev2 = md_event(freshest->sb); + if (ev1 != ev2) { + out_of_date = 1; + if (ev1 > ev2) + freshest = rdev; + } + } + if (out_of_date) { + printk(OUT_OF_DATE); + printk("md: freshest: %s\n", partition_name(freshest->dev)); + } + memcpy (sb, freshest->sb, sizeof(*sb)); + + /* + * at this point we have picked the 'best' superblock + * from all available superblocks. + * now we validate this superblock and kick out possibly + * failed disks. + */ + ITERATE_RDEV(mddev,rdev,tmp) { + /* + * Kick all non-fresh devices faulty + */ + __u64 ev1, ev2; + ev1 = md_event(rdev->sb); + ev2 = md_event(sb); + ++ev1; + if (ev1 < ev2) { + printk("md: kicking non-fresh %s from array!\n", + partition_name(rdev->dev)); + kick_rdev_from_array(rdev); + continue; + } + } + + /* + * Fix up changed device names ... but only if this disk has a + * recent update time. Use faulty checksum ones too. + */ + ITERATE_RDEV(mddev,rdev,tmp) { + __u64 ev1, ev2, ev3; + if (rdev->faulty) { /* REMOVEME */ + MD_BUG(); + goto abort; + } + ev1 = md_event(rdev->sb); + ev2 = md_event(sb); + ev3 = ev2; + --ev3; + if ((rdev->dev != rdev->old_dev) && + ((ev1 == ev2) || (ev1 == ev3))) { + mdp_disk_t *desc; + + printk("md: device name has changed from %s to %s since last import!\n", partition_name(rdev->old_dev), partition_name(rdev->dev)); + if (rdev->desc_nr == -1) { + MD_BUG(); + goto abort; + } + desc = &sb->disks[rdev->desc_nr]; + if (rdev->old_dev != MKDEV(desc->major, desc->minor)) { + MD_BUG(); + goto abort; + } + desc->major = MAJOR(rdev->dev); + desc->minor = MINOR(rdev->dev); + desc = &rdev->sb->this_disk; + desc->major = MAJOR(rdev->dev); + desc->minor = MINOR(rdev->dev); + } + } + + /* + * Remove unavailable and faulty devices ... + * + * note that if an array becomes completely unrunnable due to + * missing devices, we do not write the superblock back, so the + * administrator has a chance to fix things up. The removal thus + * only happens if it's nonfatal to the contents of the array. + */ + for (i = 0; i < MD_SB_DISKS; i++) { + int found; + mdp_disk_t *desc; + kdev_t dev; + + desc = sb->disks + i; + dev = MKDEV(desc->major, desc->minor); + + /* + * We kick faulty devices/descriptors immediately. + */ + if (disk_faulty(desc)) { + found = 0; + ITERATE_RDEV(mddev,rdev,tmp) { + if (rdev->desc_nr != desc->number) + continue; + printk("md%d: kicking faulty %s!\n", + mdidx(mddev),partition_name(rdev->dev)); + kick_rdev_from_array(rdev); + found = 1; + break; + } + if (!found) { + if (dev == MKDEV(0,0)) + continue; + printk("md%d: removing former faulty %s!\n", + mdidx(mddev), partition_name(dev)); + } + remove_descriptor(desc, sb); + continue; + } + + if (dev == MKDEV(0,0)) + continue; + /* + * Is this device present in the rdev ring? 
+ */ + found = 0; + ITERATE_RDEV(mddev,rdev,tmp) { + if (rdev->desc_nr == desc->number) { + found = 1; + break; + } + } + if (found) + continue; + + printk("md%d: former device %s is unavailable, removing from array!\n", mdidx(mddev), partition_name(dev)); + remove_descriptor(desc, sb); + } + + /* + * Double check wether all devices mentioned in the + * superblock are in the rdev ring. + */ + for (i = 0; i < MD_SB_DISKS; i++) { + mdp_disk_t *desc; + kdev_t dev; + + desc = sb->disks + i; + dev = MKDEV(desc->major, desc->minor); + + if (dev == MKDEV(0,0)) + continue; + + if (disk_faulty(desc)) { + MD_BUG(); + goto abort; + } + + rdev = find_rdev(mddev, dev); + if (!rdev) { + MD_BUG(); + goto abort; + } + } + + /* + * Do a final reality check. + */ + ITERATE_RDEV(mddev,rdev,tmp) { + if (rdev->desc_nr == -1) { + MD_BUG(); + goto abort; + } + /* + * is the desc_nr unique? + */ + ITERATE_RDEV(mddev,rdev2,tmp2) { + if ((rdev2 != rdev) && + (rdev2->desc_nr == rdev->desc_nr)) { + MD_BUG(); + goto abort; + } + } + /* + * is the device unique? + */ + ITERATE_RDEV(mddev,rdev2,tmp2) { + if ((rdev2 != rdev) && + (rdev2->dev == rdev->dev)) { + MD_BUG(); + goto abort; + } + } + } + + /* + * Check if we can support this RAID array + */ + if (sb->major_version != MD_MAJOR_VERSION || + sb->minor_version > MD_MINOR_VERSION) { + + printk (OLD_VERSION, mdidx(mddev), sb->major_version, + sb->minor_version, sb->patch_version); + goto abort; + } + + if ((sb->state != (1 << MD_SB_CLEAN)) && ((sb->level == 1) || + (sb->level == 4) || (sb->level == 5))) + printk (NOT_CLEAN_IGNORE, mdidx(mddev)); + + return 0; +abort: + return 1; +} + +#undef INCONSISTENT +#undef OUT_OF_DATE +#undef OLD_VERSION +#undef OLD_LEVEL + +static int device_size_calculation (mddev_t * mddev) +{ + int data_disks = 0, persistent; + unsigned int readahead; + mdp_super_t *sb = mddev->sb; + struct md_list_head *tmp; + mdk_rdev_t *rdev; + + /* + * Do device size calculation. Bail out if too small. 
+ * (we have to do this after having validated chunk_size, + * because device size has to be modulo chunk_size) + */ + persistent = !mddev->sb->not_persistent; + ITERATE_RDEV(mddev,rdev,tmp) { + if (rdev->faulty) + continue; + if (rdev->size) { + MD_BUG(); + continue; + } + rdev->size = calc_dev_size(rdev->dev, mddev, persistent); + if (rdev->size < sb->chunk_size / 1024) { + printk (KERN_WARNING + "md: Dev %s smaller than chunk_size: %ldk < %dk\n", + partition_name(rdev->dev), + rdev->size, sb->chunk_size / 1024); + return -EINVAL; + } + } + + switch (sb->level) { + case -3: + data_disks = 1; + break; + case -2: + data_disks = 1; + break; + case -1: + zoned_raid_size(mddev); + data_disks = 1; + break; + case 0: + zoned_raid_size(mddev); + data_disks = sb->raid_disks; + break; + case 1: + data_disks = 1; + break; + case 4: + case 5: + data_disks = sb->raid_disks-1; + break; + default: + printk (UNKNOWN_LEVEL, mdidx(mddev), sb->level); + goto abort; + } + if (!md_size[mdidx(mddev)]) + md_size[mdidx(mddev)] = sb->size * data_disks; + + readahead = MD_READAHEAD; + if ((sb->level == 0) || (sb->level == 4) || (sb->level == 5)) { + readahead = (mddev->sb->chunk_size>>READAHEAD_SHIFT) * 4 * data_disks; + if (readahead < data_disks * (MAX_SECTORS>>(READAHEAD_SHIFT-9))*2) + readahead = data_disks * (MAX_SECTORS>>(READAHEAD_SHIFT-9))*2; + } else { + if (sb->level == -3) + readahead = 0; + } + md_maxreadahead[mdidx(mddev)] = readahead; + + printk(KERN_INFO "md%d: max total readahead window set to %ldk\n", + mdidx(mddev), readahead*(READAHEAD_UNIT/1024)); + + printk(KERN_INFO + "md%d: %d data-disks, max readahead per data-disk: %ldk\n", + mdidx(mddev), data_disks, readahead/data_disks*(READAHEAD_UNIT/1024)); + return 0; +abort: + return 1; +} + + +#define TOO_BIG_CHUNKSIZE KERN_ERR \ +"too big chunk_size: %d > %d\n" + +#define TOO_SMALL_CHUNKSIZE KERN_ERR \ +"too small chunk_size: %d < %ld\n" + +#define BAD_CHUNKSIZE KERN_ERR \ +"no chunksize specified, see 'man raidtab'\n" + +static int do_md_run (mddev_t * mddev) +{ + int pnum, err; + int chunk_size; + struct md_list_head *tmp; + mdk_rdev_t *rdev; + + + if (!mddev->nb_dev) { + MD_BUG(); + return -EINVAL; + } + + if (mddev->pers) + return -EBUSY; + + /* + * Resize disks to align partitions size on a given + * chunk size. + */ + md_size[mdidx(mddev)] = 0; + + /* + * Analyze all RAID superblock(s) + */ + if (analyze_sbs(mddev)) { + MD_BUG(); + return -EINVAL; + } + + chunk_size = mddev->sb->chunk_size; + pnum = level_to_pers(mddev->sb->level); + + mddev->param.chunk_size = chunk_size; + mddev->param.personality = pnum; + + if (chunk_size > MAX_CHUNK_SIZE) { + printk(TOO_BIG_CHUNKSIZE, chunk_size, MAX_CHUNK_SIZE); + return -EINVAL; + } + /* + * chunk-size has to be a power of 2 and multiples of PAGE_SIZE + */ + if ( (1 << ffz(~chunk_size)) != chunk_size) { + MD_BUG(); + return -EINVAL; + } + if (chunk_size < PAGE_SIZE) { + printk(TOO_SMALL_CHUNKSIZE, chunk_size, PAGE_SIZE); + return -EINVAL; + } + + if (pnum >= MAX_PERSONALITY) { + MD_BUG(); + return -EINVAL; + } + + if ((pnum != RAID1) && (pnum != LINEAR) && !chunk_size) { + /* + * 'default chunksize' in the old md code used to + * be PAGE_SIZE, baaad. + * we abort here to be on the safe side. We dont + * want to continue the bad practice. 
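 *
 * Aside: the (1 << ffz(~chunk_size)) != chunk_size test above is
 * the power-of-two check: ffz(~x) is the index of the lowest set
 * bit of x, so rebuilding a value from that single bit reproduces
 * x only when exactly one bit is set.  For example:
 *
 *	x = 4096 (0x1000):  1 << ffz(~x) = 1 << 12 = 4096  -> accepted
 *	x = 4608 (0x1200):  1 << ffz(~x) = 1 <<  9 =  512  -> rejected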
+ */ + printk(BAD_CHUNKSIZE); + return -EINVAL; + } + + if (!pers[pnum]) + { +#ifdef CONFIG_KMOD + char module_name[80]; + sprintf (module_name, "md-personality-%d", pnum); + request_module (module_name); + if (!pers[pnum]) +#endif + return -EINVAL; + } + + if (device_size_calculation(mddev)) + return -EINVAL; + + /* + * Drop all container device buffers, from now on + * the only valid external interface is through the md + * device. + * Also find largest hardsector size + */ + md_hardsect_sizes[mdidx(mddev)] = 512; + ITERATE_RDEV(mddev,rdev,tmp) { + if (rdev->faulty) + continue; + invalidate_device(rdev->dev, 1); + if (get_hardsect_size(rdev->dev) + > md_hardsect_sizes[mdidx(mddev)]) + md_hardsect_sizes[mdidx(mddev)] = + get_hardsect_size(rdev->dev); + } + md_blocksizes[mdidx(mddev)] = 1024; + if (md_blocksizes[mdidx(mddev)] < md_hardsect_sizes[mdidx(mddev)]) + md_blocksizes[mdidx(mddev)] = md_hardsect_sizes[mdidx(mddev)]; + mddev->pers = pers[pnum]; + + err = mddev->pers->run(mddev); + if (err) { + printk("md: pers->run() failed ...\n"); + mddev->pers = NULL; + return -EINVAL; + } + + mddev->sb->state &= ~(1 << MD_SB_CLEAN); + md_update_sb(mddev); + + /* + * md_size has units of 1K blocks, which are + * twice as large as sectors. + */ + md_hd_struct[mdidx(mddev)].start_sect = 0; + register_disk(&md_gendisk, MKDEV(MAJOR_NR,mdidx(mddev)), + 1, &md_fops, md_size[mdidx(mddev)]<<1); + + read_ahead[MD_MAJOR] = 1024; + return (0); +} + +#undef TOO_BIG_CHUNKSIZE +#undef BAD_CHUNKSIZE + +#define OUT(x) do { err = (x); goto out; } while (0) + +static int restart_array (mddev_t *mddev) +{ + int err = 0; + + /* + * Complain if it has no devices + */ + if (!mddev->nb_dev) + OUT(-ENXIO); + + if (mddev->pers) { + if (!mddev->ro) + OUT(-EBUSY); + + mddev->ro = 0; + set_device_ro(mddev_to_kdev(mddev), 0); + + printk (KERN_INFO + "md: md%d switched to read-write mode.\n", mdidx(mddev)); + /* + * Kick recovery or resync if necessary + */ + md_recover_arrays(); + if (mddev->pers->restart_resync) + mddev->pers->restart_resync(mddev); + } else + err = -EINVAL; + +out: + return err; +} + +#define STILL_MOUNTED KERN_WARNING \ +"md: md%d still mounted.\n" +#define STILL_IN_USE \ +"md: md%d still in use.\n" + +static int do_md_stop (mddev_t * mddev, int ro) +{ + int err = 0, resync_interrupted = 0; + kdev_t dev = mddev_to_kdev(mddev); + + if (atomic_read(&mddev->active)>1) { + printk(STILL_IN_USE, mdidx(mddev)); + OUT(-EBUSY); + } + + if (mddev->pers) { + /* + * It is safe to call stop here, it only frees private + * data. Also, it tells us if a device is unstoppable + * (eg. resyncing is in progress) + */ + if (mddev->pers->stop_resync) + if (mddev->pers->stop_resync(mddev)) + resync_interrupted = 1; + + if (mddev->recovery_running) + md_interrupt_thread(md_recovery_thread); + + /* + * This synchronizes with signal delivery to the + * resync or reconstruction thread. It also nicely + * hangs the process if some reconstruction has not + * finished. + */ + down(&mddev->recovery_sem); + up(&mddev->recovery_sem); + + invalidate_device(dev, 1); + + if (ro) { + if (mddev->ro) + OUT(-ENXIO); + mddev->ro = 1; + } else { + if (mddev->ro) + set_device_ro(dev, 0); + if (mddev->pers->stop(mddev)) { + if (mddev->ro) + set_device_ro(dev, 1); + OUT(-EBUSY); + } + if (mddev->ro) + mddev->ro = 0; + } + if (mddev->sb) { + /* + * mark it clean only if there was no resync + * interrupted. 
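 *
 * Aside: "clean" is a single bit in sb->state.  do_md_run() clears
 * it before the array goes live, and this path sets it again on an
 * orderly stop, roughly:
 *
 *	mddev->sb->state &= ~(1 << MD_SB_CLEAN);   array running
 *	mddev->sb->state |=  (1 << MD_SB_CLEAN);   clean shutdown
 *
 * so a RAID1/4/5 array assembled without the bit set triggers the
 * NOT_CLEAN_IGNORE warning in analyze_sbs() above.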
+ */ + if (!mddev->recovery_running && !resync_interrupted) { + printk("md: marking sb clean...\n"); + mddev->sb->state |= 1 << MD_SB_CLEAN; + } + md_update_sb(mddev); + } + if (ro) + set_device_ro(dev, 1); + } + + /* + * Free resources if final stop + */ + if (!ro) { + printk (KERN_INFO "md: md%d stopped.\n", mdidx(mddev)); + free_mddev(mddev); + + } else + printk (KERN_INFO + "md: md%d switched to read-only mode.\n", mdidx(mddev)); +out: + return err; +} + +#undef OUT + +/* + * We have to safely support old arrays too. + */ +int detect_old_array (mdp_super_t *sb) +{ + if (sb->major_version > 0) + return 0; + if (sb->minor_version >= 90) + return 0; + + return -EINVAL; +} + + +static void autorun_array (mddev_t *mddev) +{ + mdk_rdev_t *rdev; + struct md_list_head *tmp; + int err; + + if (mddev->disks.prev == &mddev->disks) { + MD_BUG(); + return; + } + + printk("md: running: "); + + ITERATE_RDEV(mddev,rdev,tmp) { + printk("<%s>", partition_name(rdev->dev)); + } + printk("\nmd: now!\n"); + + err = do_md_run (mddev); + if (err) { + printk("md :do_md_run() returned %d\n", err); + /* + * prevent the writeback of an unrunnable array + */ + mddev->sb_dirty = 0; + do_md_stop (mddev, 0); + } +} + +/* + * lets try to run arrays based on all disks that have arrived + * until now. (those are in the ->pending list) + * + * the method: pick the first pending disk, collect all disks with + * the same UUID, remove all from the pending list and put them into + * the 'same_array' list. Then order this list based on superblock + * update time (freshest comes first), kick out 'old' disks and + * compare superblocks. If everything's fine then run it. + * + * If "unit" is allocated, then bump its reference count + */ +static void autorun_devices (kdev_t countdev) +{ + struct md_list_head candidates; + struct md_list_head *tmp; + mdk_rdev_t *rdev0, *rdev; + mddev_t *mddev; + kdev_t md_kdev; + + + printk("md: autorun ...\n"); + while (pending_raid_disks.next != &pending_raid_disks) { + rdev0 = md_list_entry(pending_raid_disks.next, + mdk_rdev_t, pending); + + printk("md: considering %s ...\n", partition_name(rdev0->dev)); + MD_INIT_LIST_HEAD(&candidates); + ITERATE_RDEV_PENDING(rdev,tmp) { + if (uuid_equal(rdev0, rdev)) { + if (!sb_equal(rdev0->sb, rdev->sb)) { + printk("md: %s has same UUID as %s, but superblocks differ ...\n", partition_name(rdev->dev), partition_name(rdev0->dev)); + continue; + } + printk("md: adding %s ...\n", partition_name(rdev->dev)); + md_list_del(&rdev->pending); + md_list_add(&rdev->pending, &candidates); + } + } + /* + * now we have a set of devices, with all of them having + * mostly sane superblocks. It's time to allocate the + * mddev. + */ + md_kdev = MKDEV(MD_MAJOR, rdev0->sb->md_minor); + mddev = kdev_to_mddev(md_kdev); + if (mddev) { + printk("md: md%d already running, cannot run %s\n", + mdidx(mddev), partition_name(rdev0->dev)); + ITERATE_RDEV_GENERIC(candidates,pending,rdev,tmp) + export_rdev(rdev); + continue; + } + mddev = alloc_mddev(md_kdev); + if (mddev == NULL) { + printk("md: cannot allocate memory for md drive.\n"); + break; + } + if (md_kdev == countdev) + atomic_inc(&mddev->active); + printk("md: created md%d\n", mdidx(mddev)); + ITERATE_RDEV_GENERIC(candidates,pending,rdev,tmp) { + bind_rdev_to_array(rdev, mddev); + md_list_del(&rdev->pending); + MD_INIT_LIST_HEAD(&rdev->pending); + } + autorun_array(mddev); + } + printk("md: ... autorun DONE.\n"); +} + +/* + * import RAID devices based on one partition + * if possible, the array gets run as well. 
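 *
 * Aside: uuid_equal() and sb_equal(), used by the autorun code
 * above, are defined outside this hunk.  The UUID test is
 * approximately a word-by-word compare of the 128-bit set UUID:
 *
 *	static int uuid_equal(mdk_rdev_t *rdev1, mdk_rdev_t *rdev2)
 *	{
 *		return	rdev1->sb->set_uuid0 == rdev2->sb->set_uuid0 &&
 *			rdev1->sb->set_uuid1 == rdev2->sb->set_uuid1 &&
 *			rdev1->sb->set_uuid2 == rdev2->sb->set_uuid2 &&
 *			rdev1->sb->set_uuid3 == rdev2->sb->set_uuid3;
 *	}
 *
 * while sb_equal() catches two devices that claim the same UUID
 * but carry genuinely different superblocks.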
+ */ + +#define BAD_VERSION KERN_ERR \ +"md: %s has RAID superblock version 0.%d, autodetect needs v0.90 or higher\n" + +#define OUT_OF_MEM KERN_ALERT \ +"md: out of memory.\n" + +#define NO_DEVICE KERN_ERR \ +"md: disabled device %s\n" + +#define AUTOADD_FAILED KERN_ERR \ +"md: auto-adding devices to md%d FAILED (error %d).\n" + +#define AUTOADD_FAILED_USED KERN_ERR \ +"md: cannot auto-add device %s to md%d, already used.\n" + +#define AUTORUN_FAILED KERN_ERR \ +"md: auto-running md%d FAILED (error %d).\n" + +#define MDDEV_BUSY KERN_ERR \ +"md: cannot auto-add to md%d, already running.\n" + +#define AUTOADDING KERN_INFO \ +"md: auto-adding devices to md%d, based on %s's superblock.\n" + +#define AUTORUNNING KERN_INFO \ +"md: auto-running md%d.\n" + +static int autostart_array (kdev_t startdev, kdev_t countdev) +{ + int err = -EINVAL, i; + mdp_super_t *sb = NULL; + mdk_rdev_t *start_rdev = NULL, *rdev; + + if (md_import_device(startdev, 1)) { + printk("md: could not import %s!\n", partition_name(startdev)); + goto abort; + } + + start_rdev = find_rdev_all(startdev); + if (!start_rdev) { + MD_BUG(); + goto abort; + } + if (start_rdev->faulty) { + printk("md: can not autostart based on faulty %s!\n", + partition_name(startdev)); + goto abort; + } + md_list_add(&start_rdev->pending, &pending_raid_disks); + + sb = start_rdev->sb; + + err = detect_old_array(sb); + if (err) { + printk("md: array version is too old to be autostarted, use raidtools 0.90 mkraid --upgrade\nto upgrade the array without data loss!\n"); + goto abort; + } + + for (i = 0; i < MD_SB_DISKS; i++) { + mdp_disk_t *desc; + kdev_t dev; + + desc = sb->disks + i; + dev = MKDEV(desc->major, desc->minor); + + if (dev == MKDEV(0,0)) + continue; + if (dev == startdev) + continue; + if (md_import_device(dev, 1)) { + printk("md: could not import %s, trying to run array nevertheless.\n", partition_name(dev)); + continue; + } + rdev = find_rdev_all(dev); + if (!rdev) { + MD_BUG(); + goto abort; + } + md_list_add(&rdev->pending, &pending_raid_disks); + } + + /* + * possibly return codes + */ + autorun_devices(countdev); + return 0; + +abort: + if (start_rdev) + export_rdev(start_rdev); + return err; +} + +#undef BAD_VERSION +#undef OUT_OF_MEM +#undef NO_DEVICE +#undef AUTOADD_FAILED_USED +#undef AUTOADD_FAILED +#undef AUTORUN_FAILED +#undef AUTOADDING +#undef AUTORUNNING + + +static int get_version (void * arg) +{ + mdu_version_t ver; + + ver.major = MD_MAJOR_VERSION; + ver.minor = MD_MINOR_VERSION; + ver.patchlevel = MD_PATCHLEVEL_VERSION; + + if (md_copy_to_user(arg, &ver, sizeof(ver))) + return -EFAULT; + + return 0; +} + +#define SET_FROM_SB(x) info.x = mddev->sb->x +static int get_array_info (mddev_t * mddev, void * arg) +{ + mdu_array_info_t info; + + if (!mddev->sb) + return -EINVAL; + + SET_FROM_SB(major_version); + SET_FROM_SB(minor_version); + SET_FROM_SB(patch_version); + SET_FROM_SB(ctime); + SET_FROM_SB(level); + SET_FROM_SB(size); + SET_FROM_SB(nr_disks); + SET_FROM_SB(raid_disks); + SET_FROM_SB(md_minor); + SET_FROM_SB(not_persistent); + + SET_FROM_SB(utime); + SET_FROM_SB(state); + SET_FROM_SB(active_disks); + SET_FROM_SB(working_disks); + SET_FROM_SB(failed_disks); + SET_FROM_SB(spare_disks); + + SET_FROM_SB(layout); + SET_FROM_SB(chunk_size); + + if (md_copy_to_user(arg, &info, sizeof(info))) + return -EFAULT; + + return 0; +} +#undef SET_FROM_SB + +#define SET_FROM_SB(x) info.x = mddev->sb->disks[nr].x +static int get_disk_info (mddev_t * mddev, void * arg) +{ + mdu_disk_info_t info; + unsigned int nr; + + if 
(!mddev->sb)
+		return -EINVAL;
+
+	if (md_copy_from_user(&info, arg, sizeof(info)))
+		return -EFAULT;
+
+	nr = info.number;
+	if (nr >= mddev->sb->raid_disks+mddev->sb->spare_disks)
+		return -EINVAL;
+
+	SET_FROM_SB(major);
+	SET_FROM_SB(minor);
+	SET_FROM_SB(raid_disk);
+	SET_FROM_SB(state);
+
+	if (md_copy_to_user(arg, &info, sizeof(info)))
+		return -EFAULT;
+
+	return 0;
+}
+#undef SET_FROM_SB
+
+#define SET_SB(x) mddev->sb->disks[nr].x = info->x
+
+static int add_new_disk (mddev_t * mddev, mdu_disk_info_t *info)
+{
+	int err, size, persistent;
+	mdk_rdev_t *rdev;
+	unsigned int nr;
+	kdev_t dev;
+	dev = MKDEV(info->major,info->minor);
+
+	if (find_rdev_all(dev)) {
+		printk("md: device %s already used in a RAID array!\n",
+			partition_name(dev));
+		return -EBUSY;
+	}
+	if (!mddev->sb) {
+		/* expecting a device which has a superblock */
+		err = md_import_device(dev, 1);
+		if (err) {
+			printk("md: md_import_device returned %d\n", err);
+			return -EINVAL;
+		}
+		rdev = find_rdev_all(dev);
+		if (!rdev) {
+			MD_BUG();
+			return -EINVAL;
+		}
+		if (mddev->nb_dev) {
+			mdk_rdev_t *rdev0 = md_list_entry(mddev->disks.next,
+							mdk_rdev_t, same_set);
+			if (!uuid_equal(rdev0, rdev)) {
+				printk("md: %s has different UUID to %s\n", partition_name(rdev->dev), partition_name(rdev0->dev));
+				export_rdev(rdev);
+				return -EINVAL;
+			}
+			if (!sb_equal(rdev0->sb, rdev->sb)) {
+				printk("md: %s has same UUID but different superblock to %s\n", partition_name(rdev->dev), partition_name(rdev0->dev));
+				export_rdev(rdev);
+				return -EINVAL;
+			}
+		}
+		bind_rdev_to_array(rdev, mddev);
+		return 0;
+	}
+
+	nr = info->number;
+	if (nr >= mddev->sb->nr_disks)
+		return -EINVAL;
+
+	SET_SB(number);
+	SET_SB(major);
+	SET_SB(minor);
+	SET_SB(raid_disk);
+	SET_SB(state);
+
+	if ((info->state & (1<<MD_DISK_FAULTY))==0) {
+		err = md_import_device (dev, 0);
+		if (err) {
+			printk("md: error, md_import_device() returned %d\n", err);
+			return -EINVAL;
+		}
+		rdev = find_rdev_all(dev);
+		if (!rdev) {
+			MD_BUG();
+			return -EINVAL;
+		}
+
+		rdev->old_dev = dev;
+		rdev->desc_nr = info->number;
+
+		bind_rdev_to_array(rdev, mddev);
+
+		persistent = !mddev->sb->not_persistent;
+		if (!persistent)
+			printk("md: nonpersistent superblock ...\n");
+		if (!mddev->sb->chunk_size)
+			printk("md: no chunksize?\n");
+
+		size = calc_dev_size(dev, mddev, persistent);
+		rdev->sb_offset = calc_dev_sboffset(dev, mddev, persistent);
+
+		if (!mddev->sb->size || (mddev->sb->size > size))
+			mddev->sb->size = size;
+	}
+
+	/*
+	 * sync all other superblocks with the main superblock
+	 */
+	sync_sbs(mddev);
+
+	return 0;
+}
+#undef SET_SB
+
+static int hot_remove_disk (mddev_t * mddev, kdev_t dev)
+{
+	int err;
+	mdk_rdev_t *rdev;
+	mdp_disk_t *disk;
+
+	if (!mddev->pers)
+		return -ENODEV;
+
+	printk("md: trying to remove %s from md%d ... \n",
+		partition_name(dev), mdidx(mddev));
+
+	if (!mddev->pers->diskop) {
+		printk("md%d: personality does not support diskops!\n",
+			mdidx(mddev));
+		return -EINVAL;
+	}
+
+	rdev = find_rdev(mddev, dev);
+	if (!rdev)
+		return -ENXIO;
+
+	if (rdev->desc_nr == -1) {
+		MD_BUG();
+		return -EINVAL;
+	}
+	disk = &mddev->sb->disks[rdev->desc_nr];
+	if (disk_active(disk))
+		goto busy;
+	if (disk_removed(disk)) {
+		MD_BUG();
+		return -EINVAL;
+	}
+
+	err = mddev->pers->diskop(mddev, &disk, DISKOP_HOT_REMOVE_DISK);
+	if (err == -EBUSY)
+		goto busy;
+	if (err) {
+		MD_BUG();
+		return -EINVAL;
+	}
+
+	remove_descriptor(disk, mddev->sb);
+	kick_rdev_from_array(rdev);
+	mddev->sb_dirty = 1;
+	md_update_sb(mddev);
+
+	return 0;
+busy:
+	printk("md: cannot remove active disk %s from md%d ...
\n", + partition_name(dev), mdidx(mddev)); + return -EBUSY; +} + +static int hot_add_disk (mddev_t * mddev, kdev_t dev) +{ + int i, err, persistent; + unsigned int size; + mdk_rdev_t *rdev; + mdp_disk_t *disk; + + if (!mddev->pers) + return -ENODEV; + + printk("md: trying to hot-add %s to md%d ... \n", + partition_name(dev), mdidx(mddev)); + + if (!mddev->pers->diskop) { + printk("md%d: personality does not support diskops!\n", + mdidx(mddev)); + return -EINVAL; + } + + persistent = !mddev->sb->not_persistent; + size = calc_dev_size(dev, mddev, persistent); + + if (size < mddev->sb->size) { + printk("md%d: disk size %d blocks < array size %d\n", + mdidx(mddev), size, mddev->sb->size); + return -ENOSPC; + } + + rdev = find_rdev(mddev, dev); + if (rdev) + return -EBUSY; + + err = md_import_device (dev, 0); + if (err) { + printk("md: error, md_import_device() returned %d\n", err); + return -EINVAL; + } + rdev = find_rdev_all(dev); + if (!rdev) { + MD_BUG(); + return -EINVAL; + } + if (rdev->faulty) { + printk("md: can not hot-add faulty %s disk to md%d!\n", + partition_name(dev), mdidx(mddev)); + err = -EINVAL; + goto abort_export; + } + bind_rdev_to_array(rdev, mddev); + + /* + * The rest should better be atomic, we can have disk failures + * noticed in interrupt contexts ... + */ + rdev->old_dev = dev; + rdev->size = size; + rdev->sb_offset = calc_dev_sboffset(dev, mddev, persistent); + + disk = mddev->sb->disks + mddev->sb->raid_disks; + for (i = mddev->sb->raid_disks; i < MD_SB_DISKS; i++) { + disk = mddev->sb->disks + i; + + if (!disk->major && !disk->minor) + break; + if (disk_removed(disk)) + break; + } + if (i == MD_SB_DISKS) { + printk("md%d: can not hot-add to full array!\n", mdidx(mddev)); + err = -EBUSY; + goto abort_unbind_export; + } + + if (disk_removed(disk)) { + /* + * reuse slot + */ + if (disk->number != i) { + MD_BUG(); + err = -EINVAL; + goto abort_unbind_export; + } + } else { + disk->number = i; + } + + disk->raid_disk = disk->number; + disk->major = MAJOR(dev); + disk->minor = MINOR(dev); + + if (mddev->pers->diskop(mddev, &disk, DISKOP_HOT_ADD_DISK)) { + MD_BUG(); + err = -EINVAL; + goto abort_unbind_export; + } + + mark_disk_spare(disk); + mddev->sb->nr_disks++; + mddev->sb->spare_disks++; + mddev->sb->working_disks++; + + mddev->sb_dirty = 1; + + md_update_sb(mddev); + + /* + * Kick recovery, maybe this spare has to be added to the + * array immediately. 
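 *
 * Aside: disk_active(), disk_removed() and the mark_disk_*()
 * helpers used above are bitmask macros over the descriptor's
 * state word, defined in the raid headers (md_k.h in 2.4);
 * approximately:
 *
 *	#define disk_faulty(d)   ((d)->state & (1 << MD_DISK_FAULTY))
 *	#define disk_active(d)   ((d)->state & (1 << MD_DISK_ACTIVE))
 *	#define disk_removed(d)  ((d)->state & (1 << MD_DISK_REMOVED))
 *	#define mark_disk_spare(d)	((d)->state = 0)
 *
 * so a freshly hot-added disk carries no state bits at all, which
 * is what get_spare() in the recovery path looks for.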
+ */ + md_recover_arrays(); + + return 0; + +abort_unbind_export: + unbind_rdev_from_array(rdev); + +abort_export: + export_rdev(rdev); + return err; +} + +#define SET_SB(x) mddev->sb->x = info->x +static int set_array_info (mddev_t * mddev, mdu_array_info_t *info) +{ + + if (alloc_array_sb(mddev)) + return -ENOMEM; + + mddev->sb->major_version = MD_MAJOR_VERSION; + mddev->sb->minor_version = MD_MINOR_VERSION; + mddev->sb->patch_version = MD_PATCHLEVEL_VERSION; + mddev->sb->ctime = CURRENT_TIME; + + SET_SB(level); + SET_SB(size); + SET_SB(nr_disks); + SET_SB(raid_disks); + SET_SB(md_minor); + SET_SB(not_persistent); + + SET_SB(state); + SET_SB(active_disks); + SET_SB(working_disks); + SET_SB(failed_disks); + SET_SB(spare_disks); + + SET_SB(layout); + SET_SB(chunk_size); + + mddev->sb->md_magic = MD_SB_MAGIC; + + /* + * Generate a 128 bit UUID + */ + get_random_bytes(&mddev->sb->set_uuid0, 4); + get_random_bytes(&mddev->sb->set_uuid1, 4); + get_random_bytes(&mddev->sb->set_uuid2, 4); + get_random_bytes(&mddev->sb->set_uuid3, 4); + + return 0; +} +#undef SET_SB + +static int set_disk_info (mddev_t * mddev, void * arg) +{ + printk("md: not yet"); + return -EINVAL; +} + +static int clear_array (mddev_t * mddev) +{ + printk("md: not yet"); + return -EINVAL; +} + +static int write_raid_info (mddev_t * mddev) +{ + printk("md: not yet"); + return -EINVAL; +} + +static int protect_array (mddev_t * mddev) +{ + printk("md: not yet"); + return -EINVAL; +} + +static int unprotect_array (mddev_t * mddev) +{ + printk("md: not yet"); + return -EINVAL; +} + +static int set_disk_faulty (mddev_t *mddev, kdev_t dev) +{ + int ret; + + fsync_dev(mddev_to_kdev(mddev)); + ret = md_error(mddev, dev); + return ret; +} + +static int md_ioctl (struct inode *inode, struct file *file, + unsigned int cmd, unsigned long arg) +{ + unsigned int minor; + int err = 0; + struct hd_geometry *loc = (struct hd_geometry *) arg; + mddev_t *mddev = NULL; + kdev_t dev; + + if (!md_capable_admin()) + return -EACCES; + + dev = inode->i_rdev; + minor = MINOR(dev); + if (minor >= MAX_MD_DEVS) + return -EINVAL; + + /* + * Commands dealing with the RAID driver but not any + * particular array: + */ + switch (cmd) + { + case RAID_VERSION: + err = get_version((void *)arg); + goto done; + + case PRINT_RAID_DEBUG: + err = 0; + md_print_devices(); + goto done_unlock; + +#ifndef MODULE + case RAID_AUTORUN: + err = 0; + autostart_arrays(); + goto done; +#endif + + case BLKGETSIZE: /* Return device size */ + if (!arg) { + err = -EINVAL; + goto abort; + } + err = md_put_user(md_hd_struct[minor].nr_sects, + (long *) arg); + goto done; + + case BLKFLSBUF: + fsync_dev(dev); + invalidate_buffers(dev); + goto done; + + case BLKRASET: + if (arg > 0xff) { + err = -EINVAL; + goto abort; + } + read_ahead[MAJOR(dev)] = arg; + goto done; + + case BLKRAGET: + if (!arg) { + err = -EINVAL; + goto abort; + } + err = md_put_user (read_ahead[ + MAJOR(dev)], (long *) arg); + goto done; + default:; + } + + /* + * Commands creating/starting a new array: + */ + + mddev = kdev_to_mddev(dev); + + switch (cmd) + { + case SET_ARRAY_INFO: + case START_ARRAY: + if (mddev) { + printk("md: array md%d already exists!\n", + mdidx(mddev)); + err = -EEXIST; + goto abort; + } + default:; + } + switch (cmd) + { + case SET_ARRAY_INFO: + mddev = alloc_mddev(dev); + if (!mddev) { + err = -ENOMEM; + goto abort; + } + atomic_inc(&mddev->active); + + /* + * alloc_mddev() should possibly self-lock. 
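 *
 * Aside: every path below follows the same locking pattern -- take
 * the per-array lock, then leave only through a label that drops it:
 *
 *	err = lock_mddev(mddev);
 *	if (err)
 *		goto abort;		lock never taken
 *	...
 *	goto done_unlock;		success: unlock, return err
 *	goto abort_unlock;		failure: unlock, return err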
+ */ + err = lock_mddev(mddev); + if (err) { + printk("md: ioctl, reason %d, cmd %d\n", err, cmd); + goto abort; + } + + if (mddev->sb) { + printk("md: array md%d already has a superblock!\n", + mdidx(mddev)); + err = -EBUSY; + goto abort_unlock; + } + if (arg) { + mdu_array_info_t info; + if (md_copy_from_user(&info, (void*)arg, sizeof(info))) { + err = -EFAULT; + goto abort_unlock; + } + err = set_array_info(mddev, &info); + if (err) { + printk("md: couldnt set array info. %d\n", err); + goto abort_unlock; + } + } + goto done_unlock; + + case START_ARRAY: + /* + * possibly make it lock the array ... + */ + err = autostart_array((kdev_t)arg, dev); + if (err) { + printk("md: autostart %s failed!\n", + partition_name((kdev_t)arg)); + goto abort; + } + goto done; + + default:; + } + + /* + * Commands querying/configuring an existing array: + */ + + if (!mddev) { + err = -ENODEV; + goto abort; + } + err = lock_mddev(mddev); + if (err) { + printk("md: ioctl lock interrupted, reason %d, cmd %d\n",err, cmd); + goto abort; + } + /* if we don't have a superblock yet, only ADD_NEW_DISK or STOP_ARRAY is allowed */ + if (!mddev->sb && cmd != ADD_NEW_DISK && cmd != STOP_ARRAY && cmd != RUN_ARRAY) { + err = -ENODEV; + goto abort_unlock; + } + + /* + * Commands even a read-only array can execute: + */ + switch (cmd) + { + case GET_ARRAY_INFO: + err = get_array_info(mddev, (void *)arg); + goto done_unlock; + + case GET_DISK_INFO: + err = get_disk_info(mddev, (void *)arg); + goto done_unlock; + + case RESTART_ARRAY_RW: + err = restart_array(mddev); + goto done_unlock; + + case STOP_ARRAY: + if (!(err = do_md_stop (mddev, 0))) + mddev = NULL; + goto done_unlock; + + case STOP_ARRAY_RO: + err = do_md_stop (mddev, 1); + goto done_unlock; + + /* + * We have a problem here : there is no easy way to give a CHS + * virtual geometry. We currently pretend that we have a 2 heads + * 4 sectors (with a BIG number of cylinders...). This drives + * dosfs just mad... ;-) + */ + case HDIO_GETGEO: + if (!loc) { + err = -EINVAL; + goto abort_unlock; + } + err = md_put_user (2, (char *) &loc->heads); + if (err) + goto abort_unlock; + err = md_put_user (4, (char *) &loc->sectors); + if (err) + goto abort_unlock; + err = md_put_user (md_hd_struct[mdidx(mddev)].nr_sects/8, + (short *) &loc->cylinders); + if (err) + goto abort_unlock; + err = md_put_user (md_hd_struct[minor].start_sect, + (long *) &loc->start); + goto done_unlock; + } + + /* + * The remaining ioctls are changing the state of the + * superblock, so we do not allow read-only arrays + * here: + */ + if (mddev->ro) { + err = -EROFS; + goto abort_unlock; + } + + switch (cmd) + { + case CLEAR_ARRAY: + err = clear_array(mddev); + goto done_unlock; + + case ADD_NEW_DISK: + { + mdu_disk_info_t info; + if (md_copy_from_user(&info, (void*)arg, sizeof(info))) + err = -EFAULT; + else + err = add_new_disk(mddev, &info); + goto done_unlock; + } + case HOT_REMOVE_DISK: + err = hot_remove_disk(mddev, (kdev_t)arg); + goto done_unlock; + + case HOT_ADD_DISK: + err = hot_add_disk(mddev, (kdev_t)arg); + goto done_unlock; + + case SET_DISK_INFO: + err = set_disk_info(mddev, (void *)arg); + goto done_unlock; + + case WRITE_RAID_INFO: + err = write_raid_info(mddev); + goto done_unlock; + + case UNPROTECT_ARRAY: + err = unprotect_array(mddev); + goto done_unlock; + + case PROTECT_ARRAY: + err = protect_array(mddev); + goto done_unlock; + + case SET_DISK_FAULTY: + err = set_disk_faulty(mddev, (kdev_t)arg); + goto done_unlock; + + case RUN_ARRAY: + { +/* The data is never used.... 
+			mdu_param_t param;
+			err = md_copy_from_user(&param, (mdu_param_t *)arg,
+							 sizeof(param));
+			if (err)
+				goto abort_unlock;
+*/
+			err = do_md_run (mddev);
+			/*
+			 * we have to clean up the mess if
+			 * the array cannot be run for some
+			 * reason ...
+			 */
+			if (err) {
+				mddev->sb_dirty = 0;
+				if (!do_md_stop (mddev, 0))
+					mddev = NULL;
+			}
+			goto done_unlock;
+		}
+
+		default:
+			printk(KERN_WARNING "md: %s(pid %d) used obsolete MD ioctl, upgrade your software to use new ioctls.\n", current->comm, current->pid);
+			err = -EINVAL;
+			goto abort_unlock;
+	}
+
+done_unlock:
+abort_unlock:
+	if (mddev)
+		unlock_mddev(mddev);
+
+	return err;
+done:
+	if (err)
+		printk("md: huh12?\n");
+abort:
+	return err;
+}
+
+static int md_open (struct inode *inode, struct file *file)
+{
+	/*
+	 * Always succeed, but increment the usage count
+	 */
+	mddev_t *mddev = kdev_to_mddev(inode->i_rdev);
+	if (mddev)
+		atomic_inc(&mddev->active);
+	return (0);
+}
+
+static int md_release (struct inode *inode, struct file * file)
+{
+	mddev_t *mddev = kdev_to_mddev(inode->i_rdev);
+	if (mddev)
+		atomic_dec(&mddev->active);
+	return 0;
+}
+
+static struct block_device_operations md_fops=
+{
+	open:		md_open,
+	release:	md_release,
+	ioctl:		md_ioctl,
+};
+
+
+int md_thread(void * arg)
+{
+	mdk_thread_t *thread = arg;
+	struct completion *event;
+
+	md_lock_kernel();
+
+	/*
+	 * Detach thread
+	 */
+
+	daemonize();
+
+	sprintf(current->comm, thread->name);
+	md_init_signals();
+	md_flush_signals();
+	thread->tsk = current;
+
+	/*
+	 * md_thread is a 'system-thread', its priority should be very
+	 * high. We avoid resource deadlocks individually in each
+	 * raid personality. (RAID5 does preallocation) We also use RR and
+	 * the very same RT priority as kswapd, thus we will never get
+	 * into a priority inversion deadlock.
+	 *
+	 * we definitely have to have equal or higher priority than
+	 * bdflush, otherwise bdflush will deadlock if there are too
+	 * many dirty RAID5 blocks.
+ */ + current->policy = SCHED_OTHER; + current->nice = -20; + md_unlock_kernel(); + + complete(thread->event); + while (thread->run) { + void (*run)(void *data); + DECLARE_WAITQUEUE(wait, current); + + add_wait_queue(&thread->wqueue, &wait); + set_task_state(current, TASK_INTERRUPTIBLE); + if (!test_bit(THREAD_WAKEUP, &thread->flags)) { + dprintk("md: thread %p went to sleep.\n", thread); + schedule(); + dprintk("md: thread %p woke up.\n", thread); + } + current->state = TASK_RUNNING; + remove_wait_queue(&thread->wqueue, &wait); + clear_bit(THREAD_WAKEUP, &thread->flags); + + if ((run=thread->run)) { + run(thread->data); + run_task_queue(&tq_disk); + } + if (md_signal_pending(current)) { + printk("md: %8s(%d) flushing signals.\n", current->comm, + current->pid); + md_flush_signals(); + } + } + complete(thread->event); + return 0; +} + +void md_wakeup_thread(mdk_thread_t *thread) +{ + dprintk("md: waking up MD thread %p.\n", thread); + set_bit(THREAD_WAKEUP, &thread->flags); + wake_up(&thread->wqueue); +} + +mdk_thread_t *md_register_thread (void (*run) (void *), + void *data, const char *name) +{ + mdk_thread_t *thread; + int ret; + struct completion event; + + thread = (mdk_thread_t *) kmalloc + (sizeof(mdk_thread_t), GFP_KERNEL); + if (!thread) + return NULL; + + memset(thread, 0, sizeof(mdk_thread_t)); + md_init_waitqueue_head(&thread->wqueue); + + init_completion(&event); + thread->event = &event; + thread->run = run; + thread->data = data; + thread->name = name; + ret = kernel_thread(md_thread, thread, 0); + if (ret < 0) { + kfree(thread); + return NULL; + } + wait_for_completion(&event); + return thread; +} + +void md_interrupt_thread (mdk_thread_t *thread) +{ + if (!thread->tsk) { + MD_BUG(); + return; + } + printk("md: interrupting MD-thread pid %d\n", thread->tsk->pid); + send_sig(SIGKILL, thread->tsk, 1); +} + +void md_unregister_thread (mdk_thread_t *thread) +{ + struct completion event; + + init_completion(&event); + + thread->event = &event; + thread->run = NULL; + thread->name = NULL; + md_interrupt_thread(thread); + wait_for_completion(&event); + kfree(thread); +} + +void md_recover_arrays (void) +{ + if (!md_recovery_thread) { + MD_BUG(); + return; + } + md_wakeup_thread(md_recovery_thread); +} + + +int md_error (mddev_t *mddev, kdev_t rdev) +{ + mdk_rdev_t * rrdev; + +/* printk("md_error dev:(%d:%d), rdev:(%d:%d), (caller: %p,%p,%p,%p).\n",MAJOR(dev),MINOR(dev),MAJOR(rdev),MINOR(rdev), __builtin_return_address(0),__builtin_return_address(1),__builtin_return_address(2),__builtin_return_address(3)); + */ + if (!mddev) { + MD_BUG(); + return 0; + } + rrdev = find_rdev(mddev, rdev); + if (rrdev->faulty) + return 0; + if (mddev->pers->error_handler == NULL + || mddev->pers->error_handler(mddev,rdev) <= 0) { + free_disk_sb(rrdev); + rrdev->faulty = 1; + } else + return 1; + /* + * if recovery was running, stop it now. + */ + if (mddev->pers->stop_resync) + mddev->pers->stop_resync(mddev); + if (mddev->recovery_running) + md_interrupt_thread(md_recovery_thread); + md_recover_arrays(); + + return 0; +} + +static int status_unused (char * page) +{ + int sz = 0, i = 0; + mdk_rdev_t *rdev; + struct md_list_head *tmp; + + sz += sprintf(page + sz, "unused devices: "); + + ITERATE_RDEV_ALL(rdev,tmp) { + if (!rdev->same_set.next && !rdev->same_set.prev) { + /* + * The device is not yet used by any array. 
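 *
 * Aside: an rdev is zero-filled at import, so both same_set link
 * pointers stay NULL until bind_rdev_to_array() chains the device
 * into an array's disk list; the NULL/NULL test above is therefore
 * a cheap "never bound to any array" check.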
+			 */
+			i++;
+			sz += sprintf(page + sz, "%s ",
+				partition_name(rdev->dev));
+		}
+	}
+	if (!i)
+		sz += sprintf(page + sz, "<none>");
+
+	sz += sprintf(page + sz, "\n");
+	return sz;
+}
+
+
+static int status_resync (char * page, mddev_t * mddev)
+{
+	int sz = 0;
+	unsigned long max_blocks, resync, res, dt, db, rt;
+
+	resync = (mddev->curr_resync - atomic_read(&mddev->recovery_active))/2;
+	max_blocks = mddev->sb->size;
+
+	/*
+	 * Should not happen.
+	 */
+	if (!max_blocks) {
+		MD_BUG();
+		return 0;
+	}
+	res = (resync/1024)*1000/(max_blocks/1024 + 1);
+	{
+		int i, x = res/50, y = 20-x;
+		sz += sprintf(page + sz, "[");
+		for (i = 0; i < x; i++)
+			sz += sprintf(page + sz, "=");
+		sz += sprintf(page + sz, ">");
+		for (i = 0; i < y; i++)
+			sz += sprintf(page + sz, ".");
+		sz += sprintf(page + sz, "] ");
+	}
+	if (!mddev->recovery_running)
+		/*
+		 * true resync
+		 */
+		sz += sprintf(page + sz, " resync =%3lu.%lu%% (%lu/%lu)",
+				res/10, res % 10, resync, max_blocks);
+	else
+		/*
+		 * recovery ...
+		 */
+		sz += sprintf(page + sz, " recovery =%3lu.%lu%% (%lu/%lu)",
+				res/10, res % 10, resync, max_blocks);
+
+	/*
+	 * We do not want to overflow, so the order of operands and
+	 * the * 100 / 100 trick are important. We do a +1 to be
+	 * safe against division by zero. We only estimate anyway.
+	 *
+	 * dt: time from mark until now
+	 * db: blocks written from mark until now
+	 * rt: remaining time
+	 */
+	dt = ((jiffies - mddev->resync_mark) / HZ);
+	if (!dt) dt++;
+	db = resync - (mddev->resync_mark_cnt/2);
+	rt = (dt * ((max_blocks-resync) / (db/100+1)))/100;
+
+	sz += sprintf(page + sz, " finish=%lu.%lumin", rt / 60, (rt % 60)/6);
+
+	sz += sprintf(page + sz, " speed=%ldK/sec", db/dt);
+
+	return sz;
+}
+
+static int md_status_read_proc(char *page, char **start, off_t off,
+			int count, int *eof, void *data)
+{
+	int sz = 0, j, size;
+	struct md_list_head *tmp, *tmp2;
+	mdk_rdev_t *rdev;
+	mddev_t *mddev;
+
+	sz += sprintf(page + sz, "Personalities : ");
+	for (j = 0; j < MAX_PERSONALITY; j++)
+		if (pers[j])
+			sz += sprintf(page+sz, "[%s] ", pers[j]->name);
+
+	sz += sprintf(page+sz, "\n");
+
+
+	sz += sprintf(page+sz, "read_ahead ");
+	if (read_ahead[MD_MAJOR] == INT_MAX)
+		sz += sprintf(page+sz, "not set\n");
+	else
+		sz += sprintf(page+sz, "%d sectors\n", read_ahead[MD_MAJOR]);
+
+	ITERATE_MDDEV(mddev,tmp) {
+		sz += sprintf(page + sz, "md%d : %sactive", mdidx(mddev),
+						mddev->pers ?
"" : "in"); + if (mddev->pers) { + if (mddev->ro) + sz += sprintf(page + sz, " (read-only)"); + sz += sprintf(page + sz, " %s", mddev->pers->name); + } + + size = 0; + ITERATE_RDEV(mddev,rdev,tmp2) { + sz += sprintf(page + sz, " %s[%d]", + partition_name(rdev->dev), rdev->desc_nr); + if (rdev->faulty) { + sz += sprintf(page + sz, "(F)"); + continue; + } + size += rdev->size; + } + + if (mddev->nb_dev) { + if (mddev->pers) + sz += sprintf(page + sz, "\n %d blocks", + md_size[mdidx(mddev)]); + else + sz += sprintf(page + sz, "\n %d blocks", size); + } + + if (!mddev->pers) { + sz += sprintf(page+sz, "\n"); + continue; + } + + sz += mddev->pers->status (page+sz, mddev); + + sz += sprintf(page+sz, "\n "); + if (mddev->curr_resync) { + sz += status_resync (page+sz, mddev); + } else { + if (md_atomic_read(&mddev->resync_sem.count) != 1) + sz += sprintf(page + sz, " resync=DELAYED"); + } + sz += sprintf(page + sz, "\n"); + } + sz += status_unused (page + sz); + + return sz; +} + +int register_md_personality (int pnum, mdk_personality_t *p) +{ + if (pnum >= MAX_PERSONALITY) + return -EINVAL; + + if (pers[pnum]) + return -EBUSY; + + pers[pnum] = p; + printk(KERN_INFO "md: %s personality registered\n", p->name); + return 0; +} + +int unregister_md_personality (int pnum) +{ + if (pnum >= MAX_PERSONALITY) + return -EINVAL; + + printk(KERN_INFO "md: %s personality unregistered\n", pers[pnum]->name); + pers[pnum] = NULL; + return 0; +} + +static mdp_disk_t *get_spare(mddev_t *mddev) +{ + mdp_super_t *sb = mddev->sb; + mdp_disk_t *disk; + mdk_rdev_t *rdev; + struct md_list_head *tmp; + + ITERATE_RDEV(mddev,rdev,tmp) { + if (rdev->faulty) + continue; + if (!rdev->sb) { + MD_BUG(); + continue; + } + disk = &sb->disks[rdev->desc_nr]; + if (disk_faulty(disk)) { + MD_BUG(); + continue; + } + if (disk_active(disk)) + continue; + return disk; + } + return NULL; +} + +static unsigned int sync_io[DK_MAX_MAJOR][DK_MAX_DISK]; +void md_sync_acct(kdev_t dev, unsigned long nr_sectors) +{ + unsigned int major = MAJOR(dev); + unsigned int index; + + index = disk_index(dev); + if ((index >= DK_MAX_DISK) || (major >= DK_MAX_MAJOR)) + return; + + sync_io[major][index] += nr_sectors; +} + +static int is_mddev_idle (mddev_t *mddev) +{ + mdk_rdev_t * rdev; + struct md_list_head *tmp; + int idle; + unsigned long curr_events; + + idle = 1; + ITERATE_RDEV(mddev,rdev,tmp) { + int major = MAJOR(rdev->dev); + int idx = disk_index(rdev->dev); + + if ((idx >= DK_MAX_DISK) || (major >= DK_MAX_MAJOR)) + continue; + + curr_events = kstat.dk_drive_rblk[major][idx] + + kstat.dk_drive_wblk[major][idx] ; + curr_events -= sync_io[major][idx]; +// printk("md: events(major: %d, idx: %d): %ld\n", major, idx, curr_events); + if ((curr_events - rdev->last_events) > 32) { +// printk("!I(%ld)%x", curr_events - rdev->last_events, rdev->dev); + rdev->last_events = curr_events; + idle = 0; + } + } + return idle; +} + +MD_DECLARE_WAIT_QUEUE_HEAD(resync_wait); + +void md_done_sync(mddev_t *mddev, int blocks, int ok) +{ + /* another "blocks" (512byte) blocks have been synced */ + atomic_sub(blocks, &mddev->recovery_active); + wake_up(&mddev->recovery_wait); + if (!ok) { + // stop recovery, signal do_sync .... 
+ } +} + +#define SYNC_MARKS 10 +#define SYNC_MARK_STEP (3*HZ) +int md_do_sync(mddev_t *mddev, mdp_disk_t *spare) +{ + mddev_t *mddev2; + unsigned int max_sectors, currspeed, + j, window, err, serialize; + unsigned long mark[SYNC_MARKS]; + unsigned long mark_cnt[SYNC_MARKS]; + int last_mark,m; + struct md_list_head *tmp; + unsigned long last_check; + + + err = down_interruptible(&mddev->resync_sem); + if (err) + goto out_nolock; + +recheck: + serialize = 0; + ITERATE_MDDEV(mddev2,tmp) { + if (mddev2 == mddev) + continue; + if (mddev2->curr_resync && match_mddev_units(mddev,mddev2)) { + printk(KERN_INFO "md: serializing resync, md%d shares one or more physical units with md%d!\n", mdidx(mddev), mdidx(mddev2)); + serialize = 1; + break; + } + } + if (serialize) { + interruptible_sleep_on(&resync_wait); + if (md_signal_pending(current)) { + md_flush_signals(); + err = -EINTR; + goto out; + } + goto recheck; + } + + mddev->curr_resync = 1; + + max_sectors = mddev->sb->size<<1; + + printk(KERN_INFO "md: syncing RAID array md%d\n", mdidx(mddev)); + printk(KERN_INFO "md: minimum _guaranteed_ reconstruction speed: %d KB/sec/disc.\n", + sysctl_speed_limit_min); + printk(KERN_INFO "md: using maximum available idle IO bandwith (but not more than %d KB/sec) for reconstruction.\n", sysctl_speed_limit_max); + + /* + * Resync has low priority. + */ + current->nice = 19; + + is_mddev_idle(mddev); /* this also initializes IO event counters */ + for (m = 0; m < SYNC_MARKS; m++) { + mark[m] = jiffies; + mark_cnt[m] = 0; + } + last_mark = 0; + mddev->resync_mark = mark[last_mark]; + mddev->resync_mark_cnt = mark_cnt[last_mark]; + + /* + * Tune reconstruction: + */ + window = MAX_READAHEAD*(PAGE_SIZE/512); + printk(KERN_INFO "md: using %dk window, over a total of %d blocks.\n",window/2,max_sectors/2); + + atomic_set(&mddev->recovery_active, 0); + init_waitqueue_head(&mddev->recovery_wait); + last_check = 0; + for (j = 0; j < max_sectors;) { + int sectors; + + sectors = mddev->pers->sync_request(mddev, j); + + if (sectors < 0) { + err = sectors; + goto out; + } + atomic_add(sectors, &mddev->recovery_active); + j += sectors; + mddev->curr_resync = j; + + if (last_check + window > j) + continue; + + run_task_queue(&tq_disk); //?? + + if (jiffies >= mark[last_mark] + SYNC_MARK_STEP ) { + /* step marks */ + int next = (last_mark+1) % SYNC_MARKS; + + mddev->resync_mark = mark[next]; + mddev->resync_mark_cnt = mark_cnt[next]; + mark[next] = jiffies; + mark_cnt[next] = j - atomic_read(&mddev->recovery_active); + last_mark = next; + } + + + if (md_signal_pending(current)) { + /* + * got a signal, exit. + */ + mddev->curr_resync = 0; + printk("md: md_do_sync() got signal ... exiting\n"); + md_flush_signals(); + err = -EINTR; + goto out; + } + + /* + * this loop exits only if either when we are slower than + * the 'hard' speed limit, or the system was IO-idle for + * a jiffy. + * the system might be non-idle CPU-wise, but we only care + * about not overloading the IO subsystem. 
(things like an + * e2fsck being done on the RAID array should execute fast) + */ +repeat: + if (md_need_resched(current)) + schedule(); + + currspeed = (j-mddev->resync_mark_cnt)/2/((jiffies-mddev->resync_mark)/HZ +1) +1; + + if (currspeed > sysctl_speed_limit_min) { + current->nice = 19; + + if ((currspeed > sysctl_speed_limit_max) || + !is_mddev_idle(mddev)) { + current->state = TASK_INTERRUPTIBLE; + md_schedule_timeout(HZ/4); + if (!md_signal_pending(current)) + goto repeat; + } + } else + current->nice = -20; + } + printk(KERN_INFO "md: md%d: sync done.\n",mdidx(mddev)); + err = 0; + /* + * this also signals 'finished resyncing' to md_stop + */ +out: + wait_event(mddev->recovery_wait, atomic_read(&mddev->recovery_active)==0); + up(&mddev->resync_sem); +out_nolock: + mddev->curr_resync = 0; + wake_up(&resync_wait); + return err; +} + + +/* + * This is a kernel thread which syncs a spare disk with the active array + * + * the amount of foolproofing might seem to be a tad excessive, but an + * early (not so error-safe) version of raid1syncd synced the first 0.5 gigs + * of my root partition with the first 0.5 gigs of my /home partition ... so + * i'm a bit nervous ;) + */ +void md_do_recovery (void *data) +{ + int err; + mddev_t *mddev; + mdp_super_t *sb; + mdp_disk_t *spare; + struct md_list_head *tmp; + + printk(KERN_INFO "md: recovery thread got woken up ...\n"); +restart: + ITERATE_MDDEV(mddev,tmp) { + sb = mddev->sb; + if (!sb) + continue; + if (mddev->recovery_running) + continue; + if (sb->active_disks == sb->raid_disks) + continue; + if (!sb->spare_disks) { + printk(KERN_ERR "md%d: no spare disk to reconstruct array! -- continuing in degraded mode\n", mdidx(mddev)); + continue; + } + /* + * now here we get the spare and resync it. + */ + if ((spare = get_spare(mddev)) == NULL) + continue; + printk(KERN_INFO "md%d: resyncing spare disk %s to replace failed disk\n", mdidx(mddev), partition_name(MKDEV(spare->major,spare->minor))); + if (!mddev->pers->diskop) + continue; + if (mddev->pers->diskop(mddev, &spare, DISKOP_SPARE_WRITE)) + continue; + down(&mddev->recovery_sem); + mddev->recovery_running = 1; + err = md_do_sync(mddev, spare); + if (err == -EIO) { + printk(KERN_INFO "md%d: spare disk %s failed, skipping to next spare.\n", mdidx(mddev), partition_name(MKDEV(spare->major,spare->minor))); + if (!disk_faulty(spare)) { + mddev->pers->diskop(mddev,&spare,DISKOP_SPARE_INACTIVE); + mark_disk_faulty(spare); + mark_disk_nonsync(spare); + mark_disk_inactive(spare); + sb->spare_disks--; + sb->working_disks--; + sb->failed_disks++; + } + } else + if (disk_faulty(spare)) + mddev->pers->diskop(mddev, &spare, + DISKOP_SPARE_INACTIVE); + if (err == -EINTR || err == -ENOMEM) { + /* + * Recovery got interrupted, or ran out of mem ... + * signal back that we have finished using the array. 
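 *
 * Aside: the recovery thread drives a spare through the
 * personality's diskop() callback in a fixed sequence, roughly:
 *
 *	DISKOP_SPARE_WRITE	start rebuilding onto the spare
 *	md_do_sync()		copy the data across
 *	DISKOP_SPARE_ACTIVE	success: promote to active member
 *	DISKOP_SPARE_INACTIVE	failure/interrupt: release the spare
 *
 * with the superblock counters (active_disks, spare_disks, ...)
 * adjusted to match whichever exit is taken.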
+ */ + mddev->pers->diskop(mddev, &spare, + DISKOP_SPARE_INACTIVE); + up(&mddev->recovery_sem); + mddev->recovery_running = 0; + continue; + } else { + mddev->recovery_running = 0; + up(&mddev->recovery_sem); + } + if (!disk_faulty(spare)) { + /* + * the SPARE_ACTIVE diskop possibly changes the + * pointer too + */ + mddev->pers->diskop(mddev, &spare, DISKOP_SPARE_ACTIVE); + mark_disk_sync(spare); + mark_disk_active(spare); + sb->active_disks++; + sb->spare_disks--; + } + mddev->sb_dirty = 1; + md_update_sb(mddev); + goto restart; + } + printk(KERN_INFO "md: recovery thread finished ...\n"); + +} + +int md_notify_reboot(struct notifier_block *this, + unsigned long code, void *x) +{ + struct md_list_head *tmp; + mddev_t *mddev; + + if ((code == MD_SYS_DOWN) || (code == MD_SYS_HALT) + || (code == MD_SYS_POWER_OFF)) { + + printk(KERN_INFO "md: stopping all md devices.\n"); + + ITERATE_MDDEV(mddev,tmp) + do_md_stop (mddev, 1); + /* + * certain more exotic SCSI devices are known to be + * volatile wrt too early system reboots. While the + * right place to handle this issue is the given + * driver, we do want to have a safe RAID driver ... + */ + md_mdelay(1000*1); + } + return NOTIFY_DONE; +} + +struct notifier_block md_notifier = { + md_notify_reboot, + NULL, + 0 +}; + +static void md_geninit (void) +{ + int i; + + for(i = 0; i < MAX_MD_DEVS; i++) { + md_blocksizes[i] = 1024; + md_size[i] = 0; + md_hardsect_sizes[i] = 512; + md_maxreadahead[i] = MD_READAHEAD; + } + blksize_size[MAJOR_NR] = md_blocksizes; + blk_size[MAJOR_NR] = md_size; + max_readahead[MAJOR_NR] = md_maxreadahead; + hardsect_size[MAJOR_NR] = md_hardsect_sizes; + + dprintk("md: sizeof(mdp_super_t) = %d\n", (int)sizeof(mdp_super_t)); + +#ifdef CONFIG_PROC_FS + create_proc_read_entry("mdstat", 0, NULL, md_status_read_proc, NULL); +#endif +} + +int md__init md_init (void) +{ + static char * name = "mdrecoveryd"; + int minor; + + printk (KERN_INFO "md: md driver %d.%d.%d MAX_MD_DEVS=%d, MD_SB_DISKS=%d\n", + MD_MAJOR_VERSION, MD_MINOR_VERSION, + MD_PATCHLEVEL_VERSION, MAX_MD_DEVS, MD_SB_DISKS); + + if (devfs_register_blkdev (MAJOR_NR, "md", &md_fops)) + { + printk (KERN_ALERT "md: Unable to get major %d for md\n", MAJOR_NR); + return (-1); + } + devfs_handle = devfs_mk_dir (NULL, "md", NULL); + /* we don't use devfs_register_series because we want to fill md_hd_struct */ + for (minor=0; minor < MAX_MD_DEVS; ++minor) { + char devname[128]; + sprintf (devname, "%u", minor); + md_hd_struct[minor].de = devfs_register (devfs_handle, + devname, DEVFS_FL_DEFAULT, MAJOR_NR, minor, + S_IFBLK | S_IRUSR | S_IWUSR, &md_fops, NULL); + } + + /* forward all md request to md_make_request */ + blk_queue_make_request(BLK_DEFAULT_QUEUE(MAJOR_NR), md_make_request); + + + read_ahead[MAJOR_NR] = INT_MAX; + md_gendisk.next = gendisk_head; + + gendisk_head = &md_gendisk; + + md_recovery_thread = md_register_thread(md_do_recovery, NULL, name); + if (!md_recovery_thread) + printk(KERN_ALERT "md: bug: couldn't allocate md_recovery_thread\n"); + + md_register_reboot_notifier(&md_notifier); + raid_table_header = register_sysctl_table(raid_root_table, 1); + + md_geninit(); + return (0); +} + + +#ifndef MODULE + +/* + * When md (and any require personalities) are compiled into the kernel + * (not a module), arrays can be assembles are boot time using with AUTODETECT + * where specially marked partitions are registered with md_autodetect_dev(), + * and with MD_BOOT where devices to be collected are given on the boot line + * with md=..... 
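 *
 * Aside, with hypothetical devices, two example boot lines in the
 * syntax md_setup() below accepts:
 *
 *	md=0,0,4,0,/dev/hda1,/dev/hdc1	raid0, 64k chunks (1 << (4+12))
 *	md=1,/dev/sda2,/dev/sdb2	assemble from on-disk superblocks
 *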
+ * The code for that is here. + */ + +struct { + int set; + int noautodetect; +} raid_setup_args md__initdata; + +/* + * Searches all registered partitions for autorun RAID arrays + * at boot time. + */ +static int detected_devices[128]; +static int dev_cnt; + +void md_autodetect_dev (kdev_t dev) +{ + if (dev_cnt >= 0 && dev_cnt < 127) + detected_devices[dev_cnt++] = dev; +} + + +static void autostart_arrays (void) +{ + mdk_rdev_t *rdev; + int i; + + printk(KERN_INFO "md: Autodetecting RAID arrays.\n"); + + for (i = 0; i < dev_cnt; i++) { + kdev_t dev = detected_devices[i]; + + if (md_import_device(dev,1)) { + printk(KERN_ALERT "md: could not import %s!\n", + partition_name(dev)); + continue; + } + /* + * Sanity checks: + */ + rdev = find_rdev_all(dev); + if (!rdev) { + MD_BUG(); + continue; + } + if (rdev->faulty) { + MD_BUG(); + continue; + } + md_list_add(&rdev->pending, &pending_raid_disks); + } + dev_cnt = 0; + + autorun_devices(-1); +} + +static struct { + char device_set [MAX_MD_DEVS]; + int pers[MAX_MD_DEVS]; + int chunk[MAX_MD_DEVS]; + char *device_names[MAX_MD_DEVS]; +} md_setup_args md__initdata; + +/* + * Parse the command-line parameters given our kernel, but do not + * actually try to invoke the MD device now; that is handled by + * md_setup_drive after the low-level disk drivers have initialised. + * + * 27/11/1999: Fixed to work correctly with the 2.3 kernel (which + * assigns the task of parsing integer arguments to the + * invoked program now). Added ability to initialise all + * the MD devices (by specifying multiple "md=" lines) + * instead of just one. -- KTK + * 18May2000: Added support for persistant-superblock arrays: + * md=n,0,factor,fault,device-list uses RAID0 for device n + * md=n,-1,factor,fault,device-list uses LINEAR for device n + * md=n,device-list reads a RAID superblock from the devices + * elements in device-list are read by name_to_kdev_t so can be + * a hex number or something like /dev/hda1 /dev/sdb + * 2001-06-03: Dave Cinege + * Shifted name_to_kdev_t() and related operations to md_set_drive() + * for later execution. Rewrote section to make devfs compatible. + */ +static int md__init md_setup(char *str) +{ + int minor, level, factor, fault; + char *pername = ""; + char *str1 = str; + + if (get_option(&str, &minor) != 2) { /* MD Number */ + printk("md: Too few arguments supplied to md=.\n"); + return 0; + } + if (minor >= MAX_MD_DEVS) { + printk ("md: md=%d, Minor device number too high.\n", minor); + return 0; + } else if (md_setup_args.device_names[minor]) { + printk ("md: md=%d, Specified more then once. Replacing previous definition.\n", minor); + } + switch (get_option(&str, &level)) { /* RAID Personality */ + case 2: /* could be 0 or -1.. 
*/ + if (level == 0 || level == -1) { + if (get_option(&str, &factor) != 2 || /* Chunk Size */ + get_option(&str, &fault) != 2) { + printk("md: Too few arguments supplied to md=.\n"); + return 0; + } + md_setup_args.pers[minor] = level; + md_setup_args.chunk[minor] = 1 << (factor+12); + switch(level) { + case -1: + level = LINEAR; + pername = "linear"; + break; + case 0: + level = RAID0; + pername = "raid0"; + break; + default: + printk ("md: The kernel has not been configured for raid%d" + " support!\n", level); + return 0; + } + md_setup_args.pers[minor] = level; + break; + } + /* FALL THROUGH */ + case 1: /* the first device is numeric */ + str = str1; + /* FALL THROUGH */ + case 0: + md_setup_args.pers[minor] = 0; + pername="super-block"; + } + + printk ("md: Will configure md%d (%s) from %s, below.\n", + minor, pername, str); + md_setup_args.device_names[minor] = str; + + return 1; +} + +extern kdev_t name_to_kdev_t(char *line) md__init; +void md__init md_setup_drive(void) +{ + int minor, i; + kdev_t dev; + mddev_t*mddev; + kdev_t devices[MD_SB_DISKS+1]; + + for (minor = 0; minor < MAX_MD_DEVS; minor++) { + int err = 0; + char *devname; + mdu_disk_info_t dinfo; + + if ((devname = md_setup_args.device_names[minor]) == 0) continue; + + for (i = 0; i < MD_SB_DISKS && devname != 0; i++) { + + char *p; + void *handle; + + if ((p = strchr(devname, ',')) != NULL) + *p++ = 0; + + dev = name_to_kdev_t(devname); + handle = devfs_find_handle(NULL, devname, MAJOR (dev), MINOR (dev), + DEVFS_SPECIAL_BLK, 1); + if (handle != 0) { + unsigned major, minor; + devfs_get_maj_min(handle, &major, &minor); + dev = MKDEV(major, minor); + } + if (dev == 0) { + printk ("md: Unknown device name: %s\n", devname); + break; + } + + devices[i] = dev; + md_setup_args.device_set[minor] = 1; + + devname = p; + } + devices[i] = 0; + + if (md_setup_args.device_set[minor] == 0) + continue; + + if (mddev_map[minor].mddev) { + printk("md: Ignoring md=%d, already autodetected. 
(Use raid=noautodetect)\n", minor);
+			continue;
+		}
+		printk("md: Loading md%d: %s\n", minor, md_setup_args.device_names[minor]);
+
+		mddev = alloc_mddev(MKDEV(MD_MAJOR,minor));
+		if (mddev == NULL) {
+			printk("md: kmalloc failed - cannot start array %d\n", minor);
+			continue;
+		}
+		if (md_setup_args.pers[minor]) {
+			/* non-persistent */
+			mdu_array_info_t ainfo;
+			ainfo.level = pers_to_level(md_setup_args.pers[minor]);
+			ainfo.size = 0;
+			ainfo.nr_disks =0;
+			ainfo.raid_disks =0;
+			ainfo.md_minor =minor;
+			ainfo.not_persistent = 1;
+
+			ainfo.state = (1 << MD_SB_CLEAN);
+			ainfo.active_disks = 0;
+			ainfo.working_disks = 0;
+			ainfo.failed_disks = 0;
+			ainfo.spare_disks = 0;
+			ainfo.layout = 0;
+			ainfo.chunk_size = md_setup_args.chunk[minor];
+			err = set_array_info(mddev, &ainfo);
+			for (i = 0; !err && (dev = devices[i]); i++) {
+				dinfo.number = i;
+				dinfo.raid_disk = i;
+				dinfo.state = (1<<MD_DISK_ACTIVE)|(1<<MD_DISK_SYNC);
+				dinfo.major = MAJOR(dev);
+				dinfo.minor = MINOR(dev);
+				mddev->sb->nr_disks++;
+				mddev->sb->raid_disks++;
+				mddev->sb->active_disks++;
+				mddev->sb->working_disks++;
+				err = add_new_disk (mddev, &dinfo);
+			}
+		} else {
+			/* persistent */
+			for (i = 0; (dev = devices[i]); i++) {
+				dinfo.major = MAJOR(dev);
+				dinfo.minor = MINOR(dev);
+				add_new_disk (mddev, &dinfo);
+			}
+		}
+		if (!err)
+			err = do_md_run(mddev);
+		if (err) {
+			mddev->sb_dirty = 0;
+			do_md_stop(mddev, 0);
+			printk("md: starting md%d failed\n", minor);
+		}
+	}
+}
+
+static int md__init raid_setup(char *str)
+{
+	int len, pos;
+
+	len = strlen(str) + 1;
+	pos = 0;
+
+	while (pos < len) {
+		char *comma = strchr(str+pos, ',');
+		int wlen;
+		if (comma)
+			wlen = (comma-str)-pos;
+		else	wlen = (len-1)-pos;
+
+		if (strncmp(str, "noautodetect", wlen) == 0)
+			raid_setup_args.noautodetect = 1;
+		pos += wlen+1;
+	}
+	raid_setup_args.set = 1;
+	return 1;
+}
+
+int md__init md_run_setup(void)
+{
+	if (raid_setup_args.noautodetect)
+		printk(KERN_INFO "md: Skipping autodetection of RAID arrays.
(raid=noautodetect)\n"); + else + autostart_arrays(); + md_setup_drive(); + return 0; +} + +__setup("raid=", raid_setup); +__setup("md=", md_setup); + +__initcall(md_init); +__initcall(md_run_setup); + +#else /* It is a MODULE */ + +int init_module (void) +{ + return md_init(); +} + +static void free_device_names(void) +{ + while (device_names.next != &device_names) { + struct list_head *tmp = device_names.next; + list_del(tmp); + kfree(tmp); + } +} + + +void cleanup_module (void) +{ + struct gendisk **gendisk_ptr; + + md_unregister_thread(md_recovery_thread); + devfs_unregister(devfs_handle); + + devfs_unregister_blkdev(MAJOR_NR,"md"); + unregister_reboot_notifier(&md_notifier); + unregister_sysctl_table(raid_table_header); +#ifdef CONFIG_PROC_FS + remove_proc_entry("mdstat", NULL); +#endif + + gendisk_ptr = &gendisk_head; + while (*gendisk_ptr) { + if (*gendisk_ptr == &md_gendisk) { + *gendisk_ptr = md_gendisk.next; + break; + } + gendisk_ptr = & (*gendisk_ptr)->next; + } + blk_dev[MAJOR_NR].queue = NULL; + blksize_size[MAJOR_NR] = NULL; + blk_size[MAJOR_NR] = NULL; + max_readahead[MAJOR_NR] = NULL; + hardsect_size[MAJOR_NR] = NULL; + + free_device_names(); + +} +#endif + +MD_EXPORT_SYMBOL(md_size); +MD_EXPORT_SYMBOL(register_md_personality); +MD_EXPORT_SYMBOL(unregister_md_personality); +MD_EXPORT_SYMBOL(partition_name); +MD_EXPORT_SYMBOL(md_error); +MD_EXPORT_SYMBOL(md_do_sync); +MD_EXPORT_SYMBOL(md_sync_acct); +MD_EXPORT_SYMBOL(md_done_sync); +MD_EXPORT_SYMBOL(md_recover_arrays); +MD_EXPORT_SYMBOL(md_register_thread); +MD_EXPORT_SYMBOL(md_unregister_thread); +MD_EXPORT_SYMBOL(md_update_sb); +MD_EXPORT_SYMBOL(md_wakeup_thread); +MD_EXPORT_SYMBOL(md_print_devices); +MD_EXPORT_SYMBOL(find_rdev_nr); +MD_EXPORT_SYMBOL(md_interrupt_thread); +MD_EXPORT_SYMBOL(mddev_map); +MD_EXPORT_SYMBOL(md_check_ordering); + diff -urpN linux-2.4.9-linus/drivers/md/xor.c linux-2.4.9-larpage/drivers/md/xor.c --- linux-2.4.9-linus/drivers/md/xor.c 2001-01-22 14:49:36.000000000 -0800 +++ linux-2.4.9-larpage/drivers/md/xor.c 2002-11-20 02:02:45.000000000 -0800 @@ -57,7 +57,7 @@ xor_block(unsigned int count, struct buf /* Set of all registered templates. */ static struct xor_block_template *template_list; -#define BENCH_SIZE (PAGE_SIZE) +#define BENCH_SIZE (MMUPAGE_SIZE) static void do_xor_speed(struct xor_block_template *tmpl, void *b1, void *b2) @@ -101,13 +101,14 @@ calibrate_xor_block(void) { void *b1, *b2; struct xor_block_template *f, *fastest; + int order = get_order(4*BENCH_SIZE); - b1 = (void *) md__get_free_pages(GFP_KERNEL, 2); + b1 = (void *) md__get_free_pages(GFP_KERNEL, order); if (! b1) { printk("raid5: Yikes! No memory available.\n"); return -ENOMEM; } - b2 = b1 + 2*PAGE_SIZE + BENCH_SIZE; + b2 = b1 + 3*BENCH_SIZE; printk(KERN_INFO "raid5: measuring checksumming speed\n"); sti(); @@ -118,7 +119,7 @@ calibrate_xor_block(void) #undef xor_speed - free_pages((unsigned long)b1, 2); + free_pages((unsigned long)b1, order); fastest = template_list; for (f = fastest; f; f = f->next) diff -urpN linux-2.4.9-linus/drivers/md/xor.c.orig linux-2.4.9-larpage/drivers/md/xor.c.orig --- linux-2.4.9-linus/drivers/md/xor.c.orig 1969-12-31 16:00:00.000000000 -0800 +++ linux-2.4.9-larpage/drivers/md/xor.c.orig 2002-11-20 02:02:45.000000000 -0800 @@ -0,0 +1,142 @@ +/* + * xor.c : Multiple Devices driver for Linux + * + * Copyright (C) 1996, 1997, 1998, 1999, 2000, + * Ingo Molnar, Matti Aarnio, Jakub Jelinek, Richard Henderson. + * + * Dispatch optimized RAID-5 checksumming functions. 
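 *
 * Aside on the xor.c change above: the old benchmark hard-coded an
 * order-2 (four page) allocation and placed b2 at
 * b1 + 2*PAGE_SIZE + BENCH_SIZE, i.e. three benchmark buffers in
 * when BENCH_SIZE == PAGE_SIZE.  With BENCH_SIZE now MMUPAGE_SIZE
 * (possibly smaller than PAGE_SIZE), the same geometry is expressed
 * in BENCH_SIZE units:
 *
 *	int order = get_order(4*BENCH_SIZE);
 *	b1 = (void *) md__get_free_pages(GFP_KERNEL, order);
 *	b2 = b1 + 3*BENCH_SIZE;
 *
 * keeping b1 and b2 apart for the L1-colour reasons noted in
 * do_xor_speed().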
diff -urpN linux-2.4.9-linus/drivers/media/video/bttv-driver.c linux-2.4.9-larpage/drivers/media/video/bttv-driver.c
--- linux-2.4.9-linus/drivers/media/video/bttv-driver.c	2001-08-05 13:15:05.000000000 -0700
+++ linux-2.4.9-larpage/drivers/media/video/bttv-driver.c	2002-11-20 02:02:46.000000000 -0800
@@ -127,121 +127,64 @@ __setup("bttv.radio=", p_radio);
 /* Memory management functions */
 /*******************************/
 
-#define MDEBUG(x)	do { } while(0)		/* Debug memory management */
-
-/* [DaveM] I've recoded most of this so that:
- * 1) It's easier to tell what is happening
- * 2) It's more portable, especially for translating things
- *    out of vmalloc mapped areas in the kernel.
- * 3) Less unnecessary translations happen.
- *
- * The code used to assume that the kernel vmalloc mappings
- * existed in the page tables of every process, this is simply
- * not guarenteed. We now use pgd_offset_k which is the
- * defined way to get at the kernel page tables.
- */
-
-/* Given PGD from the address space's page table, return the kernel
- * virtual mapping of the physical memory mapped at ADR.
- */
-static inline unsigned long uvirt_to_kva(pgd_t *pgd, unsigned long adr)
-{
-	unsigned long ret = 0UL;
-	pmd_t *pmd;
-	pte_t *ptep, pte;
-
-	if (!pgd_none(*pgd)) {
-		pmd = pmd_offset(pgd, adr);
-		if (!pmd_none(*pmd)) {
-			ptep = pte_offset(pmd, adr);
-			pte = *ptep;
-			if(pte_present(pte)) {
-				ret = (unsigned long) page_address(pte_page(pte));
-				ret |= (adr & (PAGE_SIZE - 1));
-
-			}
-		}
-	}
-	MDEBUG(printk("uv2kva(%lx-->%lx)", adr, ret));
-	return ret;
-}
-
-static inline unsigned long uvirt_to_bus(unsigned long adr)
-{
-	unsigned long kva, ret;
-
-	kva = uvirt_to_kva(pgd_offset(current->mm, adr), adr);
-	ret = virt_to_bus((void *)kva);
-	MDEBUG(printk("uv2b(%lx-->%lx)", adr, ret));
-	return ret;
-}
-
-static inline unsigned long kvirt_to_bus(unsigned long adr)
-{
-	unsigned long va, kva, ret;
-
-	va = VMALLOC_VMADDR(adr);
-	kva = uvirt_to_kva(pgd_offset_k(va), va);
-	ret = virt_to_bus((void *)kva);
-	MDEBUG(printk("kv2b(%lx-->%lx)", adr, ret));
-	return ret;
-}
-
-/* Here we want the physical address of the memory.
- * This is used when initializing the contents of the
- * area and marking the pages as reserved.
- */
-static inline unsigned long kvirt_to_pa(unsigned long adr)
+static void *rvmalloc(unsigned long size)
 {
-	unsigned long va, kva, ret;
+	void *mem;
 
-	va = VMALLOC_VMADDR(adr);
-	kva = uvirt_to_kva(pgd_offset_k(va), va);
-	ret = __pa(kva);
-	MDEBUG(printk("kv2pa(%lx-->%lx)", adr, ret));
-	return ret;
+	mem = vmalloc_32(size);
+	if (mem) {
+		/* no junk to the user */
+		memset(mem, 0, PAGE_ALIGN(size));
+		/* no need to reserve until rvmap_page_range */
+	}
+	return mem;
 }
 
-static void * rvmalloc(signed long size)
+static void rvfree(void *mem, unsigned long size)
 {
-	void * mem;
-	unsigned long adr, page;
+	unsigned long vadr;
 
-	mem=vmalloc_32(size);
-	if (mem)
-	{
-		memset(mem, 0, size); /* Clear the ram out, no junk to the user */
-		adr=(unsigned long) mem;
-		while (size > 0)
-		{
-			page = kvirt_to_pa(adr);
-			mem_map_reserve(virt_to_page(__va(page)));
-			adr+=PAGE_SIZE;
-			size-=PAGE_SIZE;
+	if (mem) {
+		vadr = (unsigned long) mem;
+		while ((long) size > 0) {
+			ClearPageReserved(vvirt_to_page(vadr));
+			vadr += PAGE_SIZE;
+			size -= PAGE_SIZE;
 		}
+		vfree(mem);
 	}
-	return mem;
 }
 
-static void rvfree(void * mem, signed long size)
+static inline int rvmap_page_range(const char *uadr, void *mem,
+				   unsigned long size, pgprot_t prot)
 {
-	unsigned long adr, page;
-
-	if (mem)
-	{
-		adr=(unsigned long) mem;
-		while (size > 0)
-		{
-			page = kvirt_to_pa(adr);
-			mem_map_unreserve(virt_to_page(__va(page)));
-			adr+=PAGE_SIZE;
-			size-=PAGE_SIZE;
-		}
-		vfree(mem);
+	struct page *page;
+	unsigned long padr;
+	unsigned long unit = PAGE_SIZE;
+
+	while ((long) size > 0) {
+		if (unit > size)
+			unit = size;
+		page = vvirt_to_page((unsigned long)mem);
+		SetPageReserved(page);
+		padr = __pa(page_address(page));
+		if (remap_page_range((unsigned long)uadr, padr, unit, prot))
+			return -EAGAIN;
+		uadr += PAGE_SIZE;
+		mem += PAGE_SIZE;
+		size -= PAGE_SIZE;
 	}
+	return 0;
 }
 
+static inline unsigned long kvirt_to_bus(unsigned long vadr)
+{
+	unsigned long kadr;
+
+	kadr = (unsigned long) page_address(vvirt_to_page(vadr)) +
+		(vadr & ~PAGE_MASK);
+	return virt_to_bus((void *) kadr);
+}
 
 /*
  * Create the giant waste of buffer space we need for now
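
The hunk above drops the hand-rolled page-table walkers in favour of vvirt_to_page(), and defers SetPageReserved() from allocation time to rvmap_page_range(), so pages are only pinned once they are actually mapped to user space. Below is a stand-alone model of rvmap_page_range()'s walk, with remap() as a stub standing in for remap_page_range() and 4K pages assumed; it shows the short-tail clamp and why the loop tests the size as a signed value.

#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Stub in place of remap_page_range(): just report each mapping. */
static int remap(unsigned long uadr, unsigned long len)
{
	printf("map %5lu bytes at user 0x%08lx\n", len, uadr);
	return 0;
}

int main(void)
{
	unsigned long uadr = 0x40010000;
	unsigned long size = 3 * PAGE_SIZE + 100;	/* not page-aligned */
	unsigned long unit = PAGE_SIZE;

	while ((long) size > 0) {
		if (unit > size)
			unit = size;	/* clamp the short tail */
		if (remap(uadr, unit))
			return 1;
		uadr += PAGE_SIZE;
		size -= PAGE_SIZE;	/* may wrap; the signed test ends the loop */
	}
	return 0;
}
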
@@ -683,9 +626,9 @@ static int make_prisctab(struct bttv *b
 	 todo=width;
 	 while(todo)
 	 {
-	  bl=PAGE_SIZE-((PAGE_SIZE-1)&vadr);
-	  blcr=(PAGE_SIZE-((PAGE_SIZE-1)&cradr))<<shift;
-	  blcb=(PAGE_SIZE-((PAGE_SIZE-1)&cbadr))<<shift;
+	  bl=MMUPAGE_SIZE-((MMUPAGE_SIZE-1)&vadr);
+	  blcr=(MMUPAGE_SIZE-((MMUPAGE_SIZE-1)&cradr))<<shift;
+	  blcb=(MMUPAGE_SIZE-((MMUPAGE_SIZE-1)&cbadr))<<shift;
 	  bl=(blcr<bl) ? blcr : bl;
 	  bl=(blcb<bl) ? blcb : bl;
 	  bl=(bl>todo) ? todo : bl;
@@ -760,7 +703,7 @@ static int make_vrisctab(struct bttv *b
 		else
 		rp= (line>=height) ? &ro : &re;
 
-		bl=PAGE_SIZE-((PAGE_SIZE-1)&vadr);
+		bl=MMUPAGE_SIZE-((MMUPAGE_SIZE-1)&vadr);
 		if (bpl<=bl)
 		{
 			*((*rp)++)=cpu_to_le32(BT848_RISC_WRITE|BT848_RISC_SOL|
@@ -775,12 +718,12 @@ static int make_vrisctab(struct bttv *b
 			*((*rp)++)=cpu_to_le32(kvirt_to_bus(vadr));
 			vadr+=bl;
 			todo-=bl;
-			while (todo>PAGE_SIZE)
+			while (todo>MMUPAGE_SIZE)
 			{
-				*((*rp)++)=cpu_to_le32(BT848_RISC_WRITE|PAGE_SIZE);
+				*((*rp)++)=cpu_to_le32(BT848_RISC_WRITE|MMUPAGE_SIZE);
 				*((*rp)++)=cpu_to_le32(kvirt_to_bus(vadr));
-				vadr+=PAGE_SIZE;
-				todo-=PAGE_SIZE;
+				vadr+=MMUPAGE_SIZE;
+				todo-=MMUPAGE_SIZE;
 			}
 			*((*rp)++)=cpu_to_le32(BT848_RISC_WRITE|BT848_RISC_EOL|todo);
 			*((*rp)++)=cpu_to_le32(kvirt_to_bus(vadr));
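
The make_prisctab()/make_vrisctab() hunks above move the Bt848 RISC program builders from PAGE_SIZE to MMUPAGE_SIZE steps: a vmalloc'd capture buffer is guaranteed physically contiguous only within one MMU page, so each DMA write must end at an MMU page boundary even when the kernel page is larger. A stand-alone model of how one scanline is carved into SOL/middle/EOL writes (the address and line length are illustrative only):

#include <stdio.h>

#define MMUPAGE_SIZE 4096UL

int main(void)
{
	unsigned long vadr = 0xd0801f00;	/* start of the scanline */
	unsigned long bpl = 10000;		/* bytes per line */
	/* bytes left before the next MMU page boundary */
	unsigned long bl = MMUPAGE_SIZE - ((MMUPAGE_SIZE - 1) & vadr);
	unsigned long todo = bpl;

	if (bpl <= bl) {
		printf("SOL|EOL write of %lu\n", bpl);	/* fits in one page */
		return 0;
	}
	printf("SOL write of %lu\n", bl);		/* up to the boundary */
	vadr += bl;
	todo -= bl;
	while (todo > MMUPAGE_SIZE) {
		printf("mid write of %lu\n", MMUPAGE_SIZE);	/* whole pages */
		vadr += MMUPAGE_SIZE;
		todo -= MMUPAGE_SIZE;
	}
	printf("EOL write of %lu\n", todo);		/* the remainder */
	/* 256 + 4096 + 4096 + 1552 == 10000 */
	return 0;
}
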
@@ -2022,34 +1965,17 @@ static int bttv_ioctl(struct video_devic
 
 /*
  * This maps the vmalloced and reserved fbuffer to user space.
- *
- * FIXME:
- *  - PAGE_READONLY should suffice!?
- *  - remap_page_range is kind of inefficient for page by page remapping.
- *    But e.g. pte_alloc() does not work in modules ... :-(
  */
 
 static int do_bttv_mmap(struct bttv *btv, const char *adr, unsigned long size)
 {
-	unsigned long start=(unsigned long) adr;
-	unsigned long page,pos;
-
 	if (size>gbuffers*gbufsize)
 		return -EINVAL;
 	if (!btv->fbuffer) {
 		if(fbuffer_alloc(btv))
 			return -EINVAL;
 	}
-	pos=(unsigned long) btv->fbuffer;
-	while (size > 0) {
-		page = kvirt_to_pa(pos);
-		if (remap_page_range(start, page, PAGE_SIZE, PAGE_SHARED))
-			return -EAGAIN;
-		start+=PAGE_SIZE;
-		pos+=PAGE_SIZE;
-		size-=PAGE_SIZE;
-	}
-	return 0;
+	return rvmap_page_range(adr, btv->fbuffer, size, PAGE_SHARED);
 }
 
 static int bttv_mmap(struct video_device *dev, const char *adr, unsigned long size)
*/ + return 0; + } + case VIDIOCGFREQ: + case VIDIOCSFREQ: + case VIDIOCGAUDIO: + case VIDIOCSAUDIO: + bttv_ioctl((struct video_device *)btv,cmd,arg); + break; + default: + return -ENOIOCTLCMD; + } + return 0; +} + +static struct video_device radio_template= +{ + owner: THIS_MODULE, + name: "bttv radio", + type: VID_TYPE_TUNER, + hardware: VID_HARDWARE_BT848, + open: radio_open, + close: radio_close, + read: radio_read, /* just returns -EINVAL */ + write: bttv_write, /* just returns -EINVAL */ + ioctl: radio_ioctl, + minor: -1, +}; + + +static void bt848_set_risc_jmps(struct bttv *btv, int flags) +{ + if (-1 == flags) { + /* defaults */ + flags = 0; + if (btv->scr_on) + flags |= 0x03; + if (btv->vbi_on) + flags |= 0x0c; + } + + if (bttv_debug > 1) + printk("bttv%d: set_risc_jmp %08lx:", + btv->nr,virt_to_bus(btv->risc_jmp)); + + /* Sync to start of odd field */ + btv->risc_jmp[0]=cpu_to_le32(BT848_RISC_SYNC|BT848_RISC_RESYNC + |BT848_FIFO_STATUS_VRE); + btv->risc_jmp[1]=cpu_to_le32(0); + + /* Jump to odd vbi sub */ + btv->risc_jmp[2]=cpu_to_le32(BT848_RISC_JUMP|(0xd<<20)); + if (flags&8) { + if (bttv_debug > 1) + printk(" ev=%08lx",virt_to_bus(btv->vbi_odd)); + btv->risc_jmp[3]=cpu_to_le32(virt_to_bus(btv->vbi_odd)); + } else { + if (bttv_debug > 1) + printk(" -----------"); + btv->risc_jmp[3]=cpu_to_le32(virt_to_bus(btv->risc_jmp+4)); + } + + /* Jump to odd sub */ + btv->risc_jmp[4]=cpu_to_le32(BT848_RISC_JUMP|(0xe<<20)); + if (0 != btv->risc_cap_odd) { + if (bttv_debug > 1) + printk(" e%d=%08x",btv->gq_grab,btv->risc_cap_odd); + flags |= 3; + btv->risc_jmp[5]=cpu_to_le32(btv->risc_cap_odd); + } else if ((flags&2) && + (!btv->win.interlace || 0 == btv->risc_cap_even)) { + if (bttv_debug > 1) + printk(" eo=%08lx",virt_to_bus(btv->risc_scr_odd)); + btv->risc_jmp[5]=cpu_to_le32(virt_to_bus(btv->risc_scr_odd)); + } else { + if (bttv_debug > 1) + printk(" -----------"); + btv->risc_jmp[5]=cpu_to_le32(virt_to_bus(btv->risc_jmp+6)); + } + + + /* Sync to start of even field */ + btv->risc_jmp[6]=cpu_to_le32(BT848_RISC_SYNC|BT848_RISC_RESYNC + |BT848_FIFO_STATUS_VRO); + btv->risc_jmp[7]=cpu_to_le32(0); + + /* Jump to even vbi sub */ + btv->risc_jmp[8]=cpu_to_le32(BT848_RISC_JUMP); + if (flags&4) { + if (bttv_debug > 1) + printk(" ov=%08lx",virt_to_bus(btv->vbi_even)); + btv->risc_jmp[9]=cpu_to_le32(virt_to_bus(btv->vbi_even)); + } else { + if (bttv_debug > 1) + printk(" -----------"); + btv->risc_jmp[9]=cpu_to_le32(virt_to_bus(btv->risc_jmp+10)); + } + + /* Jump to even sub */ + btv->risc_jmp[10]=cpu_to_le32(BT848_RISC_JUMP|(8<<20)); + if (0 != btv->risc_cap_even) { + if (bttv_debug > 1) + printk(" o%d=%08x",btv->gq_grab,btv->risc_cap_even); + flags |= 3; + btv->risc_jmp[11]=cpu_to_le32(btv->risc_cap_even); + } else if ((flags&1) && + btv->win.interlace) { + if (bttv_debug > 1) + printk(" oo=%08lx",virt_to_bus(btv->risc_scr_even)); + btv->risc_jmp[11]=cpu_to_le32(virt_to_bus(btv->risc_scr_even)); + } else { + if (bttv_debug > 1) + printk(" -----------"); + btv->risc_jmp[11]=cpu_to_le32(virt_to_bus(btv->risc_jmp+12)); + } + + if (btv->gq_start) { + btv->risc_jmp[12]=cpu_to_le32(BT848_RISC_JUMP|(0x8<<16)|BT848_RISC_IRQ); + } else { + btv->risc_jmp[12]=cpu_to_le32(BT848_RISC_JUMP); + } + btv->risc_jmp[13]=cpu_to_le32(virt_to_bus(btv->risc_jmp)); + + /* enable capturing and DMA */ + if (bttv_debug > 1) + printk(" flags=0x%x dma=%s\n", + flags,(flags&0x0f) ?
"on" : "off"); + btaor(flags, ~0x0f, BT848_CAP_CTL); + if (flags&0x0f) + bt848_dma(btv, 3); + else + bt848_dma(btv, 0); +} + +# define do_video_register(dev,type,nr) video_register_device(dev,type,nr) + +static int __devinit init_video_dev(struct bttv *btv) +{ + audio(btv, AUDIO_MUTE, 1); + + if(do_video_register(&btv->video_dev,VFL_TYPE_GRABBER,video_nr)<0) + return -1; + if(do_video_register(&btv->vbi_dev,VFL_TYPE_VBI,vbi_nr)<0) + { + video_unregister_device(&btv->video_dev); + return -1; + } + if (btv->has_radio) + { + if(do_video_register(&btv->radio_dev, VFL_TYPE_RADIO, radio_nr)<0) + { + video_unregister_device(&btv->vbi_dev); + video_unregister_device(&btv->video_dev); + return -1; + } + } + return 1; +} + +static int __devinit init_bt848(struct bttv *btv) +{ + int j; + unsigned long irq_flags; + + btv->user=0; + init_MUTEX(&btv->lock); + + /* dump current state of the gpio registers before changing them, + * might help to make a new card work */ + if (bttv_gpio) + bttv_gpio_tracking(btv,"init #1"); + + /* reset the bt848 */ + btwrite(0, BT848_SRESET); + DEBUG(printk(KERN_DEBUG "bttv%d: bt848_mem: 0x%lx\n", btv->nr, (unsigned long) btv->bt848_mem)); + + /* not registered yet */ + btv->video_dev.minor = -1; + btv->radio_dev.minor = -1; + btv->vbi_dev.minor = -1; + + /* default setup for max. PAL size in a 1024xXXX hicolor framebuffer */ + btv->win.norm=0; /* change this to 1 for NTSC, 2 for SECAM */ + btv->win.interlace=1; + btv->win.x=0; + btv->win.y=0; + btv->win.width=320; + btv->win.height=240; + btv->win.bpp=2; + btv->win.depth=16; + btv->win.color_fmt=BT848_COLOR_FMT_RGB16; + btv->win.bpl=1024*btv->win.bpp; + btv->win.swidth=1024; + btv->win.sheight=768; + btv->win.vidadr=0; + btv->vbi_on=0; + btv->scr_on=0; + + btv->risc_scr_odd=0; + btv->risc_scr_even=0; + btv->risc_cap_odd=0; + btv->risc_cap_even=0; + btv->risc_jmp=0; + btv->vbibuf=0; + btv->field=btv->last_field=0; + + btv->errors=0; + btv->needs_restart=0; + btv->has_radio=radio[btv->nr]; + + if (!(btv->risc_scr_odd=(unsigned int *) kmalloc(RISCMEM_LEN/2, GFP_KERNEL))) + return -1; + if (!(btv->risc_scr_even=(unsigned int *) kmalloc(RISCMEM_LEN/2, GFP_KERNEL))) + return -1; + if (!(btv->risc_jmp =(unsigned int *) kmalloc(2048, GFP_KERNEL))) + return -1; + btv->vbi_odd=btv->risc_jmp+16; + btv->vbi_even=btv->vbi_odd+256; + btv->bus_vbi_odd=virt_to_bus(btv->risc_jmp+12); + btv->bus_vbi_even=virt_to_bus(btv->risc_jmp+6); + + btwrite(virt_to_bus(btv->risc_jmp+2), BT848_RISC_STRT_ADD); + btv->vbibuf=(unsigned char *) vmalloc_32(VBIBUF_SIZE); + if (!btv->vbibuf) + return -1; + if (!(btv->gbuf = kmalloc(sizeof(struct bttv_gbuf)*gbuffers,GFP_KERNEL))) + return -1; + for (j = 0; j < gbuffers; j++) { + if (!(btv->gbuf[j].risc = kmalloc(16384,GFP_KERNEL))) + return -1; + } + + memset(btv->vbibuf, 0, VBIBUF_SIZE); /* We don't want to return random + memory to the user */ + + btv->fbuffer=NULL; + +/* btwrite(0, BT848_TDEC); */ + btwrite(0x10, BT848_COLOR_CTL); + btwrite(0x00, BT848_CAP_CTL); + /* set planar and packed mode trigger points and */ + /* set rising edge of inverted GPINTR pin as irq trigger */ + btwrite(BT848_GPIO_DMA_CTL_PKTP_32| + BT848_GPIO_DMA_CTL_PLTP1_16| + BT848_GPIO_DMA_CTL_PLTP23_16| + BT848_GPIO_DMA_CTL_GPINTC| + BT848_GPIO_DMA_CTL_GPINTI, + BT848_GPIO_DMA_CTL); + + /* select direct input */ + btwrite(0x00, BT848_GPIO_REG_INP); + btwrite(0x00, BT848_GPIO_OUT_EN); + if (bttv_gpio) + bttv_gpio_tracking(btv,"init #2"); + + btwrite(BT848_IFORM_MUX1 | BT848_IFORM_XTAUTO | BT848_IFORM_AUTO, + BT848_IFORM); + + 
btwrite(0xd8, BT848_CONTRAST_LO); + bt848_bright(btv, 0x10); + + btwrite(0x20, BT848_E_VSCALE_HI); + btwrite(0x20, BT848_O_VSCALE_HI); + btwrite(/*BT848_ADC_SYNC_T|*/ + BT848_ADC_RESERVED|BT848_ADC_CRUSH, BT848_ADC); + + if (lumafilter) { + btwrite(0, BT848_E_CONTROL); + btwrite(0, BT848_O_CONTROL); + } else { + btwrite(BT848_CONTROL_LDEC, BT848_E_CONTROL); + btwrite(BT848_CONTROL_LDEC, BT848_O_CONTROL); + } + + btv->picture.colour=254<<7; + btv->picture.brightness=128<<8; + btv->picture.hue=128<<8; + btv->picture.contrast=0xd8<<7; + + btwrite(0x00, BT848_E_SCLOOP); + btwrite(0x00, BT848_O_SCLOOP); + + /* clear interrupt status */ + btwrite(0xfffffUL, BT848_INT_STAT); + + /* set interrupt mask */ + btwrite(btv->triton1| + /*BT848_INT_PABORT|BT848_INT_RIPERR|BT848_INT_PPERR| + BT848_INT_FDSR|BT848_INT_FTRGT|BT848_INT_FBUS|*/ + (fieldnr ? BT848_INT_VSYNC : 0)| + BT848_INT_GPINT| + BT848_INT_SCERR| + BT848_INT_RISCI|BT848_INT_OCERR|BT848_INT_VPRES| + BT848_INT_FMTCHG|BT848_INT_HLOCK, + BT848_INT_MASK); + + bt848_muxsel(btv, 1); + bt848_set_winsize(btv); + make_vbitab(btv); + spin_lock_irqsave(&btv->s_lock, irq_flags); + bt848_set_risc_jmps(btv,-1); + spin_unlock_irqrestore(&btv->s_lock, irq_flags); + + /* needs to be done before i2c is registered */ + if (btv->type == BTTV_HAUPPAUGE || btv->type == BTTV_HAUPPAUGE878) + bttv_hauppauge_boot_msp34xx(btv); + + /* register i2c */ + btv->tuner_type=-1; + init_bttv_i2c(btv); + + /* some card-specific stuff (needs working i2c) */ + bttv_init_card(btv); + + /* + * Now add the template and register the device unit. + */ + init_video_dev(btv); + + return 0; +} + +/* ----------------------------------------------------------------------- */ + +static char *irq_name[] = { "FMTCHG", "VSYNC", "HSYNC", "OFLOW", "HLOCK", + "VPRES", "6", "7", "I2CDONE", "GPINT", "10", + "RISCI", "FBUS", "FTRGT", "FDSR", "PPERR", + "RIPERR", "PABORT", "OCERR", "SCERR" }; + +static void bttv_irq(int irq, void *dev_id, struct pt_regs * regs) +{ + u32 stat,astat; + u32 dstat; + int count; + struct bttv *btv; + + btv=(struct bttv *)dev_id; + count=0; + while (1) + { + /* get/clear interrupt status bits */ + stat=btread(BT848_INT_STAT); + astat=stat&btread(BT848_INT_MASK); + if (!astat) + return; + btwrite(stat,BT848_INT_STAT); + + /* get device status bits */ + dstat=btread(BT848_DSTATUS); + + if (irq_debug) { + int i; + printk(KERN_DEBUG "bttv%d: irq loop=%d risc=%x, bits:", + btv->nr, count, stat>>28); + for (i = 0; i < (sizeof(irq_name)/sizeof(char*)); i++) { + if (stat & (1 << i)) + printk(" %s",irq_name[i]); + if (astat & (1 << i)) + printk("*"); + } + if (stat & BT848_INT_HLOCK) + printk(" HLOC => %s", (dstat & BT848_DSTATUS_HLOC) + ? "yes" : "no"); + if (stat & BT848_INT_VPRES) + printk(" PRES => %s", (dstat & BT848_DSTATUS_PRES) + ? "yes" : "no"); + if (stat & BT848_INT_FMTCHG) + printk(" NUML => %s", (dstat & BT848_DSTATUS_PRES) + ? "625" : "525"); + printk("\n"); + } + + if (astat&BT848_INT_GPINT) + wake_up_interruptible(&btv->gpioq); + + if (astat&BT848_INT_VSYNC) + btv->field++; + + if (astat&(BT848_INT_SCERR|BT848_INT_OCERR)) { + if (bttv_verbose) + printk("bttv%d: irq:%s%s risc_count=%08x\n", + btv->nr, + (astat&BT848_INT_SCERR) ? " SCERR" : "", + (astat&BT848_INT_OCERR) ? 
" OCERR" : "", + btread(BT848_RISC_COUNT)); + btv->errors++; + if (btv->errors < BTTV_ERRORS) { + spin_lock(&btv->s_lock); + btand(~15, BT848_GPIO_DMA_CTL); + btwrite(virt_to_bus(btv->risc_jmp+2), + BT848_RISC_STRT_ADD); + bt848_set_geo(btv,0); + bt848_set_risc_jmps(btv,-1); + spin_unlock(&btv->s_lock); + } else { + if (bttv_verbose) + printk("bttv%d: aiee: error loops\n",btv->nr); + bt848_offline(btv); + } + } + if (astat&BT848_INT_RISCI) + { + if (bttv_debug > 1) + printk("bttv%d: IRQ_RISCI\n",btv->nr); + + /* captured VBI frame */ + if (stat&(1<<28)) + { + btv->vbip=0; + /* inc vbi frame count for detecting drops */ + (*(u32 *)&(btv->vbibuf[VBIBUF_SIZE - 4]))++; + wake_up_interruptible(&btv->vbiq); + } + + /* captured full frame */ + if (stat&(2<<28) && btv->gq_grab != -1) + { + btv->last_field=btv->field; + if (bttv_debug) + printk("bttv%d: cap irq: done %d\n",btv->nr,btv->gq_grab); + do_gettimeofday(&btv->gbuf[btv->gq_grab].tv); + spin_lock(&btv->s_lock); + btv->gbuf[btv->gq_grab].stat = GBUFFER_DONE; + btv->gq_grab = -1; + if (btv->gq_in != btv->gq_out) + { + btv->gq_grab = btv->gqueue[btv->gq_out++]; + btv->gq_out = btv->gq_out % MAX_GBUFFERS; + if (bttv_debug) + printk("bttv%d: cap irq: capture %d\n",btv->nr,btv->gq_grab); + btv->risc_cap_odd = btv->gbuf[btv->gq_grab].ro; + btv->risc_cap_even = btv->gbuf[btv->gq_grab].re; + bt848_set_risc_jmps(btv,-1); + bt848_set_geo(btv,0); + btwrite(BT848_COLOR_CTL_GAMMA, + BT848_COLOR_CTL); + } else { + btv->risc_cap_odd = 0; + btv->risc_cap_even = 0; + bt848_set_risc_jmps(btv,-1); + bt848_set_geo(btv,0); + btwrite(btv->fb_color_ctl | BT848_COLOR_CTL_GAMMA, + BT848_COLOR_CTL); + } + spin_unlock(&btv->s_lock); + wake_up_interruptible(&btv->capq); + break; + } + if (stat&(8<<28) && btv->gq_start) + { + spin_lock(&btv->s_lock); + btv->gq_start = 0; + btv->gq_grab = btv->gqueue[btv->gq_out++]; + btv->gq_out = btv->gq_out % MAX_GBUFFERS; + if (bttv_debug) + printk("bttv%d: cap irq: capture %d [start]\n",btv->nr,btv->gq_grab); + btv->risc_cap_odd = btv->gbuf[btv->gq_grab].ro; + btv->risc_cap_even = btv->gbuf[btv->gq_grab].re; + bt848_set_risc_jmps(btv,-1); + bt848_set_geo(btv,0); + btwrite(BT848_COLOR_CTL_GAMMA, + BT848_COLOR_CTL); + spin_unlock(&btv->s_lock); + } + } + + if (astat&BT848_INT_HLOCK) { + if ((dstat&BT848_DSTATUS_HLOC) || (btv->radio)) + audio(btv, AUDIO_ON,0); + else + audio(btv, AUDIO_OFF,0); + } + + count++; + if (count > 20) { + btwrite(0, BT848_INT_MASK); + printk(KERN_ERR + "bttv%d: IRQ lockup, cleared int mask\n", btv->nr); + bt848_offline(btv); + } + } +} + + + +/* + * Scan for a Bt848 card, request the irq and map the io memory + */ + +static void __devexit bttv_remove(struct pci_dev *pci_dev) +{ + u8 command; + int j; + struct bttv *btv = pci_get_drvdata(pci_dev); + + if (bttv_verbose) + printk("bttv%d: unloading\n",btv->nr); + + /* unregister i2c_bus */ + if (0 == btv->i2c_rc) + i2c_bit_del_bus(&btv->i2c_adap); + + /* turn off all capturing, DMA and IRQs */ + btand(~15, BT848_GPIO_DMA_CTL); + + /* first disable interrupts before unmapping the memory! 
*/ + btwrite(0, BT848_INT_MASK); + btwrite(~0x0UL,BT848_INT_STAT); + btwrite(0x0, BT848_GPIO_OUT_EN); + if (bttv_gpio) + bttv_gpio_tracking(btv,"cleanup"); + + /* disable PCI bus-mastering */ + pci_read_config_byte(btv->dev, PCI_COMMAND, &command); + command &= ~PCI_COMMAND_MASTER; + pci_write_config_byte(btv->dev, PCI_COMMAND, command); + + /* unmap and free memory */ + for (j = 0; j < gbuffers; j++) + if (btv->gbuf[j].risc) + kfree(btv->gbuf[j].risc); + if (btv->gbuf) + kfree((void *) btv->gbuf); + + if (btv->risc_scr_odd) + kfree((void *) btv->risc_scr_odd); + + if (btv->risc_scr_even) + kfree((void *) btv->risc_scr_even); + + DEBUG(printk(KERN_DEBUG "free: risc_jmp: 0x%p.\n", btv->risc_jmp)); + if (btv->risc_jmp) + kfree((void *) btv->risc_jmp); + + DEBUG(printk(KERN_DEBUG "bt848_vbibuf: 0x%p.\n", btv->vbibuf)); + if (btv->vbibuf) + vfree((void *) btv->vbibuf); + + free_irq(btv->irq,btv); + DEBUG(printk(KERN_DEBUG "bt848_mem: 0x%p.\n", btv->bt848_mem)); + if (btv->bt848_mem) + iounmap(btv->bt848_mem); + + if (btv->video_dev.minor!=-1) + video_unregister_device(&btv->video_dev); + if (btv->vbi_dev.minor!=-1) + video_unregister_device(&btv->vbi_dev); + if (btv->radio_dev.minor != -1) + video_unregister_device(&btv->radio_dev); + + release_mem_region(pci_resource_start(btv->dev,0), + pci_resource_len(btv->dev,0)); + /* wake up any waiting processes + because shutdown flag is set, no new processes (in this queue) + are expected + */ + btv->shutdown=1; + wake_up(&btv->gpioq); + + pci_set_drvdata(pci_dev, NULL); + return; +} + + +static int __devinit bttv_probe(struct pci_dev *dev, const struct pci_device_id *pci_id) +{ + int result; + unsigned char lat; + struct bttv *btv; +#if defined(__powerpc__) + unsigned int cmd; +#endif + + printk(KERN_INFO "bttv: Bt8xx card found (%d).\n", bttv_num); + + btv=&bttvs[bttv_num]; + btv->dev=dev; + btv->nr = bttv_num; + btv->bt848_mem=NULL; + btv->vbibuf=NULL; + btv->risc_jmp=NULL; + btv->vbi_odd=NULL; + btv->vbi_even=NULL; + init_waitqueue_head(&btv->vbiq); + init_waitqueue_head(&btv->capq); + btv->vbip=VBIBUF_SIZE; + btv->s_lock = SPIN_LOCK_UNLOCKED; + init_waitqueue_head(&btv->gpioq); + btv->shutdown=0; + + memcpy(&btv->video_dev,&bttv_template, sizeof(bttv_template)); + memcpy(&btv->vbi_dev,&vbi_template, sizeof(vbi_template)); + memcpy(&btv->radio_dev,&radio_template,sizeof(radio_template)); + + btv->id=dev->device; + btv->irq=dev->irq; + btv->bt848_adr=pci_resource_start(dev,0); + if (pci_enable_device(dev)) + return -EIO; + if (!request_mem_region(pci_resource_start(dev,0), + pci_resource_len(dev,0), + "bttv")) { + return -EBUSY; + } + if (btv->id >= 878) + btv->i2c_command = 0x83; + else + btv->i2c_command=(I2C_TIMING | BT848_I2C_SCL | BT848_I2C_SDA); + + pci_read_config_byte(dev, PCI_CLASS_REVISION, &btv->revision); + pci_read_config_byte(dev, PCI_LATENCY_TIMER, &lat); + printk(KERN_INFO "bttv%d: Bt%d (rev %d) at %02x:%02x.%x, ", + bttv_num,btv->id, btv->revision, dev->bus->number, + PCI_SLOT(dev->devfn),PCI_FUNC(dev->devfn)); + printk("irq: %d, latency: %d, memory: 0x%lx\n", + btv->irq, lat, btv->bt848_adr); + + bttv_idcard(btv); + +#if defined(__powerpc__) + /* on OpenFirmware machines (PowerMac at least), PCI memory cycle */ + /* response on cards with no firmware is not enabled by OF */ + pci_read_config_dword(dev, PCI_COMMAND, &cmd); + cmd = (cmd | PCI_COMMAND_MEMORY ); + pci_write_config_dword(dev, PCI_COMMAND, cmd); +#endif + +#ifdef __sparc__ + btv->bt848_mem=(unsigned char *)btv->bt848_adr; +#else + btv->bt848_mem=ioremap(btv->bt848_adr, 
0x1000); +#endif + + /* clear interrupt mask */ + btwrite(0, BT848_INT_MASK); + + result = request_irq(btv->irq, bttv_irq, + SA_SHIRQ | SA_INTERRUPT,"bttv",(void *)btv); + if (result==-EINVAL) + { + printk(KERN_ERR "bttv%d: Bad irq number or handler\n", + bttv_num); + goto fail1; + } + if (result==-EBUSY) + { + printk(KERN_ERR "bttv%d: IRQ %d busy, change your PnP config in BIOS\n",bttv_num,btv->irq); + goto fail1; + } + if (result < 0) + goto fail1; + + if (0 != bttv_handle_chipset(btv)) { + result = -1; + goto fail2; + } + + pci_set_master(dev); + pci_set_drvdata(dev,btv); + + if(init_bt848(btv) < 0) { + bttv_remove(dev); + return -EIO; + } + bttv_num++; + + return 0; + + fail2: + free_irq(btv->irq,btv); + fail1: + release_mem_region(pci_resource_start(btv->dev,0), + pci_resource_len(btv->dev,0)); + return result; +} + +static struct pci_device_id bttv_pci_tbl[] __devinitdata = { + {PCI_VENDOR_ID_BROOKTREE, PCI_DEVICE_ID_BT848, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, + {PCI_VENDOR_ID_BROOKTREE, PCI_DEVICE_ID_BT849, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, + {PCI_VENDOR_ID_BROOKTREE, PCI_DEVICE_ID_BT878, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, + {PCI_VENDOR_ID_BROOKTREE, PCI_DEVICE_ID_BT879, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, + {0,} +}; + +MODULE_DEVICE_TABLE(pci, bttv_pci_tbl); + +static struct pci_driver bttv_pci_driver = { + name: "bttv", + id_table: bttv_pci_tbl, + probe: bttv_probe, + remove: bttv_remove, +}; + +int bttv_init_module(void) +{ + bttv_num = 0; + + printk(KERN_INFO "bttv: driver version %d.%d.%d loaded\n", + (BTTV_VERSION_CODE >> 16) & 0xff, + (BTTV_VERSION_CODE >> 8) & 0xff, + BTTV_VERSION_CODE & 0xff); + if (gbuffers < 2 || gbuffers > MAX_GBUFFERS) + gbuffers = 2; + if (gbufsize < 0 || gbufsize > BTTV_MAX_FBUF) + gbufsize = BTTV_MAX_FBUF; + if (bttv_verbose) + printk(KERN_INFO "bttv: using %d buffers with %dk (%dk total) for capture\n", + gbuffers,gbufsize/1024,gbuffers*gbufsize/1024); + + bttv_check_chipset(); + + return pci_module_init(&bttv_pci_driver); +} + +void bttv_cleanup_module(void) +{ + pci_unregister_driver(&bttv_pci_driver); + return; +} + +module_init(bttv_init_module); +module_exit(bttv_cleanup_module); + +/* + * Local variables: + * c-basic-offset: 8 + * End: + */ diff -urpN linux-2.4.9-linus/drivers/media/video/cpia.c linux-2.4.9-larpage/drivers/media/video/cpia.c --- linux-2.4.9-linus/drivers/media/video/cpia.c 2001-05-19 17:43:06.000000000 -0700 +++ linux-2.4.9-larpage/drivers/media/video/cpia.c 2002-11-20 02:02:47.000000000 -0800 @@ -173,107 +173,58 @@ static u8 flicker_jumps[2][2][4] = /* forward declaration of local function */ static void reset_camera_struct(struct cam_data *cam); -/********************************************************************** - * - * Memory management - * - * This is a shameless copy from the USB-cpia driver (linux kernel - * version 2.3.29 or so, I have no idea what this code actually does ;). - * Actually it seems to be a copy of a shameless copy of the bttv-driver. - * Or that is a copy of a shameless copy of ... (To the powers: is there - * no generic kernel-function to do this sort of stuff?) - * - * Yes, it was a shameless copy from the bttv-driver. IIRC, Alan says - * there will be one, but apparentely not yet - jerdfelt - * - **********************************************************************/ - -/* Given PGD from the address space's page table, return the kernel - * virtual mapping of the physical memory mapped at ADR. 
- */ -static inline unsigned long uvirt_to_kva(pgd_t *pgd, unsigned long adr) -{ - unsigned long ret = 0UL; - pmd_t *pmd; - pte_t *ptep, pte; - - if (!pgd_none(*pgd)) { - pmd = pmd_offset(pgd, adr); - if (!pmd_none(*pmd)) { - ptep = pte_offset(pmd, adr); - pte = *ptep; - if (pte_present(pte)) { - ret = (unsigned long) page_address(pte_page(pte)); - ret |= (adr & (PAGE_SIZE-1)); - } - } - } - return ret; -} - -/* Here we want the physical address of the memory. - * This is used when initializing the contents of the - * area and marking the pages as reserved. - */ -static inline unsigned long kvirt_to_pa(unsigned long adr) -{ - unsigned long va, kva, ret; - - va = VMALLOC_VMADDR(adr); - kva = uvirt_to_kva(pgd_offset_k(va), va); - ret = __pa(kva); - return ret; -} +/**********************************************************/ +/* Memory management functions, copied from bttv-driver.c */ +/**********************************************************/ static void *rvmalloc(unsigned long size) { void *mem; - unsigned long adr, page; - - /* Round it off to PAGE_SIZE */ - size += (PAGE_SIZE - 1); - size &= ~(PAGE_SIZE - 1); mem = vmalloc_32(size); - if (!mem) - return NULL; - - memset(mem, 0, size); /* Clear the ram out, no junk to the user */ - adr = (unsigned long) mem; - while (size > 0) { - page = kvirt_to_pa(adr); - mem_map_reserve(virt_to_page(__va(page))); - adr += PAGE_SIZE; - if (size > PAGE_SIZE) - size -= PAGE_SIZE; - else - size = 0; + if (mem) { + /* no junk to the user */ + memset(mem, 0, PAGE_ALIGN(size)); + /* no need to reserve until rvmap_page_range */ } - return mem; } static void rvfree(void *mem, unsigned long size) { - unsigned long adr, page; + unsigned long vadr; - if (!mem) - return; + if (mem) { + vadr = (unsigned long) mem; + while ((long) size > 0) { + ClearPageReserved(vvirt_to_page(vadr)); + vadr += PAGE_SIZE; + size -= PAGE_SIZE; + } + vfree(mem); + } +} - size += (PAGE_SIZE - 1); - size &= ~(PAGE_SIZE - 1); +static inline int rvmap_page_range(const char *uadr, void *mem, + unsigned long size, pgprot_t prot) +{ + struct page *page; + unsigned long padr; + unsigned long unit = PAGE_SIZE; - adr = (unsigned long) mem; - while (size > 0) { - page = kvirt_to_pa(adr); - mem_map_unreserve(virt_to_page(__va(page))); - adr += PAGE_SIZE; - if (size > PAGE_SIZE) - size -= PAGE_SIZE; - else - size = 0; + while ((long) size > 0) { + if (unit > size) + unit = size; + page = vvirt_to_page((unsigned long)mem); + SetPageReserved(page); + padr = __pa(page_address(page)); + if (remap_page_range((unsigned long)uadr, padr, unit, prot)) + return -EAGAIN; + uadr += PAGE_SIZE; + mem += PAGE_SIZE; + size -= PAGE_SIZE; } - vfree(mem); + return 0; } /********************************************************************** @@ -2974,8 +2925,6 @@ static int cpia_ioctl(struct video_devic static int cpia_mmap(struct video_device *dev, const char *adr, unsigned long size) { - unsigned long start = (unsigned long)adr; - unsigned long page, pos; struct cam_data *cam = dev->priv; int retval; @@ -3001,25 +2950,12 @@ static int cpia_mmap(struct video_device } } - pos = (unsigned long)(cam->frame_buf); - while (size > 0) { - page = kvirt_to_pa(pos); - if (remap_page_range(start, page, PAGE_SIZE, PAGE_SHARED)) { - up(&cam->busy_lock); - return -EAGAIN; - } - start += PAGE_SIZE; - pos += PAGE_SIZE; - if (size > PAGE_SIZE) - size -= PAGE_SIZE; - else - size = 0; - } + retval = rvmap_page_range(adr, cam->frame_buf, size, PAGE_SHARED); - DBG("cpia_mmap: %ld\n", size); + DBG("cpia_mmap: %ld\n", retval); 
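/*
 * Usage sketch (not from the patch; BUFLEN and the example_* names are
 * hypothetical): how a V4L driver is expected to combine the three helpers
 * above, mirroring cpia_mmap() minus its locking.  rvmalloc() returns a
 * zeroed vmalloc_32() buffer, rvmap_page_range() reserves each backing page
 * and maps it into the calling process, and rvfree() unreserves the pages
 * before vfree().  This assumes vvirt_to_page(), used by the helpers and
 * provided elsewhere by this larpage patch, returns the struct page behind
 * a vmalloc()ed address.
 */
#define BUFLEN (256 * 1024)		/* hypothetical buffer size */

static void *example_buf;		/* zeroed, vmalloc_32()ed */

static int example_mmap(const char *adr, unsigned long size)
{
	if (size > BUFLEN)
		return -EINVAL;
	if (!example_buf && !(example_buf = rvmalloc(BUFLEN)))
		return -ENOMEM;
	/* reserve the pages and map them into the user's address space */
	return rvmap_page_range(adr, example_buf, size, PAGE_SHARED);
}

static void example_free(void)
{
	if (example_buf) {
		rvfree(example_buf, BUFLEN);	/* ClearPageReserved + vfree */
		example_buf = NULL;
	}
}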
up(&cam->busy_lock); - return 0; + return retval; } int cpia_video_init(struct video_device *vdev) diff -urpN linux-2.4.9-linus/drivers/media/video/cpia.c.orig linux-2.4.9-larpage/drivers/media/video/cpia.c.orig --- linux-2.4.9-linus/drivers/media/video/cpia.c.orig 1969-12-31 16:00:00.000000000 -0800 +++ linux-2.4.9-larpage/drivers/media/video/cpia.c.orig 2002-11-20 02:02:47.000000000 -0800 @@ -0,0 +1,3270 @@ +/* + * cpia CPiA driver + * + * Supports CPiA based Video Camera's. + * + * (C) Copyright 1999-2000 Peter Pregler, + * (C) Copyright 1999-2000 Scott J. Bertin, + * (C) Copyright 1999-2000 Johannes Erdfelt, jerdfelt@valinux.com + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. + */ + +/* #define _CPIA_DEBUG_ define for verbose debug output */ +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#ifdef CONFIG_KMOD +#include +#endif + +#include "cpia.h" + +#ifdef CONFIG_VIDEO_CPIA_PP +extern int cpia_pp_init(void); +#endif +#ifdef CONFIG_VIDEO_CPIA_USB +extern int cpia_usb_init(void); +#endif + +static int video_nr = -1; + +#ifdef MODULE +MODULE_PARM(video_nr,"i"); +MODULE_AUTHOR("Scott J. 
Bertin & Peter Pregler & Johannes Erdfelt "); +MODULE_DESCRIPTION("V4L-driver for Vision CPiA based cameras"); +MODULE_SUPPORTED_DEVICE("video"); +#endif + +#define ABOUT "V4L-Driver for Vision CPiA based cameras" + +#ifndef VID_HARDWARE_CPIA +#define VID_HARDWARE_CPIA 24 /* FIXME -> from linux/videodev.h */ +#endif + +#define CPIA_MODULE_CPIA (0<<5) +#define CPIA_MODULE_SYSTEM (1<<5) +#define CPIA_MODULE_VP_CTRL (5<<5) +#define CPIA_MODULE_CAPTURE (6<<5) +#define CPIA_MODULE_DEBUG (7<<5) + +#define INPUT (DATA_IN << 8) +#define OUTPUT (DATA_OUT << 8) + +#define CPIA_COMMAND_GetCPIAVersion (INPUT | CPIA_MODULE_CPIA | 1) +#define CPIA_COMMAND_GetPnPID (INPUT | CPIA_MODULE_CPIA | 2) +#define CPIA_COMMAND_GetCameraStatus (INPUT | CPIA_MODULE_CPIA | 3) +#define CPIA_COMMAND_GotoHiPower (OUTPUT | CPIA_MODULE_CPIA | 4) +#define CPIA_COMMAND_GotoLoPower (OUTPUT | CPIA_MODULE_CPIA | 5) +#define CPIA_COMMAND_GotoSuspend (OUTPUT | CPIA_MODULE_CPIA | 7) +#define CPIA_COMMAND_GotoPassThrough (OUTPUT | CPIA_MODULE_CPIA | 8) +#define CPIA_COMMAND_ModifyCameraStatus (OUTPUT | CPIA_MODULE_CPIA | 10) + +#define CPIA_COMMAND_ReadVCRegs (INPUT | CPIA_MODULE_SYSTEM | 1) +#define CPIA_COMMAND_WriteVCReg (OUTPUT | CPIA_MODULE_SYSTEM | 2) +#define CPIA_COMMAND_ReadMCPorts (INPUT | CPIA_MODULE_SYSTEM | 3) +#define CPIA_COMMAND_WriteMCPort (OUTPUT | CPIA_MODULE_SYSTEM | 4) +#define CPIA_COMMAND_SetBaudRate (OUTPUT | CPIA_MODULE_SYSTEM | 5) +#define CPIA_COMMAND_SetECPTiming (OUTPUT | CPIA_MODULE_SYSTEM | 6) +#define CPIA_COMMAND_ReadIDATA (INPUT | CPIA_MODULE_SYSTEM | 7) +#define CPIA_COMMAND_WriteIDATA (OUTPUT | CPIA_MODULE_SYSTEM | 8) +#define CPIA_COMMAND_GenericCall (OUTPUT | CPIA_MODULE_SYSTEM | 9) +#define CPIA_COMMAND_I2CStart (OUTPUT | CPIA_MODULE_SYSTEM | 10) +#define CPIA_COMMAND_I2CStop (OUTPUT | CPIA_MODULE_SYSTEM | 11) +#define CPIA_COMMAND_I2CWrite (OUTPUT | CPIA_MODULE_SYSTEM | 12) +#define CPIA_COMMAND_I2CRead (INPUT | CPIA_MODULE_SYSTEM | 13) + +#define CPIA_COMMAND_GetVPVersion (INPUT | CPIA_MODULE_VP_CTRL | 1) +#define CPIA_COMMAND_SetColourParams (OUTPUT | CPIA_MODULE_VP_CTRL | 3) +#define CPIA_COMMAND_SetExposure (OUTPUT | CPIA_MODULE_VP_CTRL | 4) +#define CPIA_COMMAND_SetColourBalance (OUTPUT | CPIA_MODULE_VP_CTRL | 6) +#define CPIA_COMMAND_SetSensorFPS (OUTPUT | CPIA_MODULE_VP_CTRL | 7) +#define CPIA_COMMAND_SetVPDefaults (OUTPUT | CPIA_MODULE_VP_CTRL | 8) +#define CPIA_COMMAND_SetApcor (OUTPUT | CPIA_MODULE_VP_CTRL | 9) +#define CPIA_COMMAND_SetFlickerCtrl (OUTPUT | CPIA_MODULE_VP_CTRL | 10) +#define CPIA_COMMAND_SetVLOffset (OUTPUT | CPIA_MODULE_VP_CTRL | 11) +#define CPIA_COMMAND_GetColourParams (INPUT | CPIA_MODULE_VP_CTRL | 16) +#define CPIA_COMMAND_GetColourBalance (INPUT | CPIA_MODULE_VP_CTRL | 17) +#define CPIA_COMMAND_GetExposure (INPUT | CPIA_MODULE_VP_CTRL | 18) +#define CPIA_COMMAND_SetSensorMatrix (OUTPUT | CPIA_MODULE_VP_CTRL | 19) +#define CPIA_COMMAND_ColourBars (OUTPUT | CPIA_MODULE_VP_CTRL | 25) +#define CPIA_COMMAND_ReadVPRegs (INPUT | CPIA_MODULE_VP_CTRL | 30) +#define CPIA_COMMAND_WriteVPReg (OUTPUT | CPIA_MODULE_VP_CTRL | 31) + +#define CPIA_COMMAND_GrabFrame (OUTPUT | CPIA_MODULE_CAPTURE | 1) +#define CPIA_COMMAND_UploadFrame (OUTPUT | CPIA_MODULE_CAPTURE | 2) +#define CPIA_COMMAND_SetGrabMode (OUTPUT | CPIA_MODULE_CAPTURE | 3) +#define CPIA_COMMAND_InitStreamCap (OUTPUT | CPIA_MODULE_CAPTURE | 4) +#define CPIA_COMMAND_FiniStreamCap (OUTPUT | CPIA_MODULE_CAPTURE | 5) +#define CPIA_COMMAND_StartStreamCap (OUTPUT | CPIA_MODULE_CAPTURE | 6) +#define 
CPIA_COMMAND_EndStreamCap (OUTPUT | CPIA_MODULE_CAPTURE | 7) +#define CPIA_COMMAND_SetFormat (OUTPUT | CPIA_MODULE_CAPTURE | 8) +#define CPIA_COMMAND_SetROI (OUTPUT | CPIA_MODULE_CAPTURE | 9) +#define CPIA_COMMAND_SetCompression (OUTPUT | CPIA_MODULE_CAPTURE | 10) +#define CPIA_COMMAND_SetCompressionTarget (OUTPUT | CPIA_MODULE_CAPTURE | 11) +#define CPIA_COMMAND_SetYUVThresh (OUTPUT | CPIA_MODULE_CAPTURE | 12) +#define CPIA_COMMAND_SetCompressionParams (OUTPUT | CPIA_MODULE_CAPTURE | 13) +#define CPIA_COMMAND_DiscardFrame (OUTPUT | CPIA_MODULE_CAPTURE | 14) + +#define CPIA_COMMAND_OutputRS232 (OUTPUT | CPIA_MODULE_DEBUG | 1) +#define CPIA_COMMAND_AbortProcess (OUTPUT | CPIA_MODULE_DEBUG | 4) +#define CPIA_COMMAND_SetDramPage (OUTPUT | CPIA_MODULE_DEBUG | 5) +#define CPIA_COMMAND_StartDramUpload (OUTPUT | CPIA_MODULE_DEBUG | 6) +#define CPIA_COMMAND_StartDummyDtream (OUTPUT | CPIA_MODULE_DEBUG | 8) +#define CPIA_COMMAND_AbortStream (OUTPUT | CPIA_MODULE_DEBUG | 9) +#define CPIA_COMMAND_DownloadDRAM (OUTPUT | CPIA_MODULE_DEBUG | 10) + +enum { + FRAME_READY, /* Ready to grab into */ + FRAME_GRABBING, /* In the process of being grabbed into */ + FRAME_DONE, /* Finished grabbing, but not been synced yet */ + FRAME_UNUSED, /* Unused (no MCAPTURE) */ +}; + +#define COMMAND_NONE 0x0000 +#define COMMAND_SETCOMPRESSION 0x0001 +#define COMMAND_SETCOMPRESSIONTARGET 0x0002 +#define COMMAND_SETCOLOURPARAMS 0x0004 +#define COMMAND_SETFORMAT 0x0008 +#define COMMAND_PAUSE 0x0010 +#define COMMAND_RESUME 0x0020 +#define COMMAND_SETYUVTHRESH 0x0040 +#define COMMAND_SETECPTIMING 0x0080 +#define COMMAND_SETCOMPRESSIONPARAMS 0x0100 +#define COMMAND_SETEXPOSURE 0x0200 +#define COMMAND_SETCOLOURBALANCE 0x0400 +#define COMMAND_SETSENSORFPS 0x0800 +#define COMMAND_SETAPCOR 0x1000 +#define COMMAND_SETFLICKERCTRL 0x2000 +#define COMMAND_SETVLOFFSET 0x4000 + +/* Developer's Guide Table 5 p 3-34 + * indexed by [mains][sensorFps.baserate][sensorFps.divisor]*/ +static u8 flicker_jumps[2][2][4] = +{ { { 76, 38, 19, 9 }, { 92, 46, 23, 11 } }, + { { 64, 32, 16, 8 }, { 76, 38, 19, 9} } +}; + +/* forward declaration of local function */ +static void reset_camera_struct(struct cam_data *cam); + +/**********************************************************/ +/* Memory management functions, copied from bttv-driver.c */ +/**********************************************************/ + +static void *rvmalloc(unsigned long size) +{ + void *mem; + + mem = vmalloc_32(size); + if (mem) { + /* no junk to the user */ + memset(mem, 0, PAGE_ALIGN(size)); + /* no need to reserve until rvmap_page_range */ + } + return mem; +} + +static void rvfree(void *mem, unsigned long size) +{ + unsigned long vadr; + + if (mem) { + vadr = (unsigned long) mem; + while ((long) size > 0) { + ClearPageReserved(vvirt_to_page(vadr)); + vadr += PAGE_SIZE; + size -= PAGE_SIZE; + } + vfree(mem); + } +} + +static inline int rvmap_page_range(const char *uadr, void *mem, + unsigned long size, pgprot_t prot) +{ + struct page *page; + unsigned long padr; + unsigned long unit = PAGE_SIZE; + + while ((long) size > 0) { + if (unit > size) + unit = size; + page = vvirt_to_page((unsigned long)mem); + SetPageReserved(page); + padr = __pa(page_address(page)); + if (remap_page_range((unsigned long)uadr, padr, unit, prot)) + return -EAGAIN; + uadr += PAGE_SIZE; + mem += PAGE_SIZE; + size -= PAGE_SIZE; + } + return 0; +} + +/********************************************************************** + * + * /proc interface + * + 
**********************************************************************/ +#ifdef CONFIG_PROC_FS +static struct proc_dir_entry *cpia_proc_root=NULL; + +static int cpia_read_proc(char *page, char **start, off_t off, + int count, int *eof, void *data) +{ + char *out = page; + int len, tmp; + struct cam_data *cam = data; + char tmpstr[20]; + + /* IMPORTANT: This output MUST be kept under PAGE_SIZE + * or we need to get more sophisticated. */ + + out += sprintf(out, "read-only\n-----------------------\n"); + out += sprintf(out, "V4L Driver version: %d.%d.%d\n", + CPIA_MAJ_VER, CPIA_MIN_VER, CPIA_PATCH_VER); + out += sprintf(out, "CPIA Version: %d.%02d (%d.%d)\n", + cam->params.version.firmwareVersion, + cam->params.version.firmwareRevision, + cam->params.version.vcVersion, + cam->params.version.vcRevision); + out += sprintf(out, "CPIA PnP-ID: %04x:%04x:%04x\n", + cam->params.pnpID.vendor, cam->params.pnpID.product, + cam->params.pnpID.deviceRevision); + out += sprintf(out, "VP-Version: %d.%d %04x\n", + cam->params.vpVersion.vpVersion, + cam->params.vpVersion.vpRevision, + cam->params.vpVersion.cameraHeadID); + + out += sprintf(out, "system_state: %#04x\n", + cam->params.status.systemState); + out += sprintf(out, "grab_state: %#04x\n", + cam->params.status.grabState); + out += sprintf(out, "stream_state: %#04x\n", + cam->params.status.streamState); + out += sprintf(out, "fatal_error: %#04x\n", + cam->params.status.fatalError); + out += sprintf(out, "cmd_error: %#04x\n", + cam->params.status.cmdError); + out += sprintf(out, "debug_flags: %#04x\n", + cam->params.status.debugFlags); + out += sprintf(out, "vp_status: %#04x\n", + cam->params.status.vpStatus); + out += sprintf(out, "error_code: %#04x\n", + cam->params.status.errorCode); + out += sprintf(out, "video_size: %s\n", + cam->params.format.videoSize == VIDEOSIZE_CIF ? + "CIF " : "QCIF"); + out += sprintf(out, "sub_sample: %s\n", + cam->params.format.subSample == SUBSAMPLE_420 ? + "420" : "422"); + out += sprintf(out, "yuv_order: %s\n", + cam->params.format.yuvOrder == YUVORDER_YUYV ? + "YUYV" : "UYVY"); + out += sprintf(out, "roi: (%3d, %3d) to (%3d, %3d)\n", + cam->params.roi.colStart*8, + cam->params.roi.rowStart*4, + cam->params.roi.colEnd*8, + cam->params.roi.rowEnd*4); + out += sprintf(out, "actual_fps: %3d\n", cam->fps); + out += sprintf(out, "transfer_rate: %4dkB/s\n", + cam->transfer_rate); + + out += sprintf(out, "\nread-write\n"); + out += sprintf(out, "----------------------- current min" + " max default comment\n"); + out += sprintf(out, "brightness: %8d %8d %8d %8d\n", + cam->params.colourParams.brightness, 0, 100, 50); + if (cam->params.version.firmwareVersion == 1 && + cam->params.version.firmwareRevision == 2) + /* 1-02 firmware limits contrast to 80 */ + tmp = 80; + else + tmp = 96; + + out += sprintf(out, "contrast: %8d %8d %8d %8d" + " steps of 8\n", + cam->params.colourParams.contrast, 0, tmp, 48); + out += sprintf(out, "saturation: %8d %8d %8d %8d\n", + cam->params.colourParams.saturation, 0, 100, 50); + tmp = (25000+5000*cam->params.sensorFps.baserate)/ + (1<params.sensorFps.divisor); + out += sprintf(out, "sensor_fps: %4d.%03d %8d %8d %8d\n", + tmp/1000, tmp%1000, 3, 30, 15); + out += sprintf(out, "stream_start_line: %8d %8d %8d %8d\n", + 2*cam->params.streamStartLine, 0, + cam->params.format.videoSize == VIDEOSIZE_CIF ? 288:144, + cam->params.format.videoSize == VIDEOSIZE_CIF ? 240:120); + out += sprintf(out, "ecp_timing: %8s %8s %8s %8s\n", + cam->params.ecpTiming ? 
"slow" : "normal", "slow", + "normal", "normal"); + + if (cam->params.colourBalance.balanceModeIsAuto) { + sprintf(tmpstr, "auto"); + } else { + sprintf(tmpstr, "manual"); + } + out += sprintf(out, "color_balance_mode: %8s %8s %8s" + " %8s\n", tmpstr, "manual", "auto", "auto"); + out += sprintf(out, "red_gain: %8d %8d %8d %8d\n", + cam->params.colourBalance.redGain, 0, 212, 32); + out += sprintf(out, "green_gain: %8d %8d %8d %8d\n", + cam->params.colourBalance.greenGain, 0, 212, 6); + out += sprintf(out, "blue_gain: %8d %8d %8d %8d\n", + cam->params.colourBalance.blueGain, 0, 212, 92); + + if (cam->params.version.firmwareVersion == 1 && + cam->params.version.firmwareRevision == 2) + /* 1-02 firmware limits gain to 2 */ + sprintf(tmpstr, "%8d %8d", 1, 2); + else + sprintf(tmpstr, "1,2,4,8"); + + if (cam->params.exposure.gainMode == 0) + out += sprintf(out, "max_gain: unknown %18s" + " %8d\n", tmpstr, 2); + else + out += sprintf(out, "max_gain: %8d %18s %8d\n", + 1<<(cam->params.exposure.gainMode-1), tmpstr, 2); + + switch(cam->params.exposure.expMode) { + case 1: + case 3: + sprintf(tmpstr, "manual"); + break; + case 2: + sprintf(tmpstr, "auto"); + break; + default: + sprintf(tmpstr, "unknown"); + break; + } + out += sprintf(out, "exposure_mode: %8s %8s %8s" + " %8s\n", tmpstr, "manual", "auto", "auto"); + out += sprintf(out, "centre_weight: %8s %8s %8s %8s\n", + (2-cam->params.exposure.centreWeight) ? "on" : "off", + "off", "on", "on"); + out += sprintf(out, "gain: %8d %8d max_gain %8d 1,2,4,8 possible\n", + 1<params.exposure.gain, 1, 1); + if (cam->params.version.firmwareVersion == 1 && + cam->params.version.firmwareRevision == 2) + /* 1-02 firmware limits fineExp to 127 */ + tmp = 255; + else + tmp = 511; + + out += sprintf(out, "fine_exp: %8d %8d %8d %8d\n", + cam->params.exposure.fineExp*2, 0, tmp, 0); + if (cam->params.version.firmwareVersion == 1 && + cam->params.version.firmwareRevision == 2) + /* 1-02 firmware limits coarseExpHi to 0 */ + tmp = 255; + else + tmp = 65535; + + out += sprintf(out, "coarse_exp: %8d %8d %8d" + " %8d\n", cam->params.exposure.coarseExpLo+ + 256*cam->params.exposure.coarseExpHi, 0, tmp, 185); + out += sprintf(out, "red_comp: %8d %8d %8d %8d\n", + cam->params.exposure.redComp, 220, 255, 220); + out += sprintf(out, "green1_comp: %8d %8d %8d %8d\n", + cam->params.exposure.green1Comp, 214, 255, 214); + out += sprintf(out, "green2_comp: %8d %8d %8d %8d\n", + cam->params.exposure.green2Comp, 214, 255, 214); + out += sprintf(out, "blue_comp: %8d %8d %8d %8d\n", + cam->params.exposure.blueComp, 230, 255, 230); + + out += sprintf(out, "apcor_gain1: %#8x %#8x %#8x %#8x\n", + cam->params.apcor.gain1, 0, 0xff, 0x1c); + out += sprintf(out, "apcor_gain2: %#8x %#8x %#8x %#8x\n", + cam->params.apcor.gain2, 0, 0xff, 0x1a); + out += sprintf(out, "apcor_gain4: %#8x %#8x %#8x %#8x\n", + cam->params.apcor.gain4, 0, 0xff, 0x2d); + out += sprintf(out, "apcor_gain8: %#8x %#8x %#8x %#8x\n", + cam->params.apcor.gain8, 0, 0xff, 0x2a); + out += sprintf(out, "vl_offset_gain1: %8d %8d %8d %8d\n", + cam->params.vlOffset.gain1, 0, 255, 24); + out += sprintf(out, "vl_offset_gain2: %8d %8d %8d %8d\n", + cam->params.vlOffset.gain2, 0, 255, 28); + out += sprintf(out, "vl_offset_gain4: %8d %8d %8d %8d\n", + cam->params.vlOffset.gain4, 0, 255, 30); + out += sprintf(out, "vl_offset_gain8: %8d %8d %8d %8d\n", + cam->params.vlOffset.gain8, 0, 255, 30); + out += sprintf(out, "flicker_control: %8s %8s %8s %8s\n", + cam->params.flickerControl.flickerMode ? 
"on" : "off", + "off", "on", "off"); + out += sprintf(out, "mains_frequency: %8d %8d %8d %8d" + " only 50/60\n", + cam->mainsFreq ? 60 : 50, 50, 60, 50); + out += sprintf(out, "allowable_overexposure: %8d %8d %8d %8d\n", + cam->params.flickerControl.allowableOverExposure, 0, + 255, 0); + out += sprintf(out, "compression_mode: "); + switch(cam->params.compression.mode) { + case CPIA_COMPRESSION_NONE: + out += sprintf(out, "%8s", "none"); + break; + case CPIA_COMPRESSION_AUTO: + out += sprintf(out, "%8s", "auto"); + break; + case CPIA_COMPRESSION_MANUAL: + out += sprintf(out, "%8s", "manual"); + break; + default: + out += sprintf(out, "%8s", "unknown"); + break; + } + out += sprintf(out, " none,auto,manual auto\n"); + out += sprintf(out, "decimation_enable: %8s %8s %8s %8s\n", + cam->params.compression.decimation == + DECIMATION_ENAB ? "on":"off", "off", "off", + "off"); + out += sprintf(out, "compression_target: %9s %9s %9s %9s\n", + cam->params.compressionTarget.frTargeting == + CPIA_COMPRESSION_TARGET_FRAMERATE ? + "framerate":"quality", + "framerate", "quality", "quality"); + out += sprintf(out, "target_framerate: %8d %8d %8d %8d\n", + cam->params.compressionTarget.targetFR, 0, 30, 7); + out += sprintf(out, "target_quality: %8d %8d %8d %8d\n", + cam->params.compressionTarget.targetQ, 0, 255, 10); + out += sprintf(out, "y_threshold: %8d %8d %8d %8d\n", + cam->params.yuvThreshold.yThreshold, 0, 31, 15); + out += sprintf(out, "uv_threshold: %8d %8d %8d %8d\n", + cam->params.yuvThreshold.uvThreshold, 0, 31, 15); + out += sprintf(out, "hysteresis: %8d %8d %8d %8d\n", + cam->params.compressionParams.hysteresis, 0, 255, 3); + out += sprintf(out, "threshold_max: %8d %8d %8d %8d\n", + cam->params.compressionParams.threshMax, 0, 255, 11); + out += sprintf(out, "small_step: %8d %8d %8d %8d\n", + cam->params.compressionParams.smallStep, 0, 255, 1); + out += sprintf(out, "large_step: %8d %8d %8d %8d\n", + cam->params.compressionParams.largeStep, 0, 255, 3); + out += sprintf(out, "decimation_hysteresis: %8d %8d %8d %8d\n", + cam->params.compressionParams.decimationHysteresis, + 0, 255, 2); + out += sprintf(out, "fr_diff_step_thresh: %8d %8d %8d %8d\n", + cam->params.compressionParams.frDiffStepThresh, + 0, 255, 5); + out += sprintf(out, "q_diff_step_thresh: %8d %8d %8d %8d\n", + cam->params.compressionParams.qDiffStepThresh, + 0, 255, 3); + out += sprintf(out, "decimation_thresh_mod: %8d %8d %8d %8d\n", + cam->params.compressionParams.decimationThreshMod, + 0, 255, 2); + + len = out - page; + len -= off; + if (len < count) { + *eof = 1; + if (len <= 0) return 0; + } else + len = count; + + *start = page + off; + return len; +} + +static int cpia_write_proc(struct file *file, const char *buffer, + unsigned long count, void *data) +{ + return -EINVAL; +#if 0 + struct cam_data *cam = data; + struct cam_params new_params; + int retval, find_colon; + int size = count; + unsigned long val; + u32 command_flags = 0; + u8 new_mains; + + if (down_interruptible(&cam->param_lock)) + return -ERESTARTSYS; + + /* + * Skip over leading whitespace + */ + while (count && isspace(*buffer)) { + --count; + ++buffer; + } + + memcpy(&new_params, &cam->params, sizeof(struct cam_params)); + new_mains = cam->mainsFreq; + +#define MATCH(x) \ + ({ \ + int _len = strlen(x), _ret, _colon_found; \ + _ret = (_len <= count && strncmp(buffer, x, _len) == 0); \ + if (_ret) { \ + buffer += _len; \ + count -= _len; \ + if (find_colon) { \ + _colon_found = 0; \ + while (count && (*buffer == ' ' || *buffer == '\t' || \ + (!_colon_found && 
*buffer == ':'))) { \ + if (*buffer == ':') \ + _colon_found = 1; \ + --count; \ + ++buffer; \ + } \ + if (!count || !_colon_found) \ + retval = -EINVAL; \ + find_colon = 0; \ + } \ + } \ + _ret; \ + }) +#define FIRMWARE_VERSION(x,y) (new_params.version.firmwareVersion == (x) && \ + new_params.version.firmwareRevision == (y)) +#define VALUE \ + ({ \ + char *_p; \ + unsigned long int _ret; \ + _ret = simple_strtoul(buffer, &_p, 0); \ + if (_p == buffer) \ + retval = -EINVAL; \ + else { \ + count -= _p - buffer; \ + buffer = _p; \ + } \ + _ret; \ + }) + + retval = 0; + while (count && !retval) { + find_colon = 1; + if (MATCH("brightness")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val <= 100) + new_params.colourParams.brightness = val; + else + retval = -EINVAL; + } + command_flags |= COMMAND_SETCOLOURPARAMS; + } else if (MATCH("contrast")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val <= 100) { + /* contrast is in steps of 8, so round*/ + val = ((val + 3) / 8) * 8; + /* 1-02 firmware limits contrast to 80*/ + if (FIRMWARE_VERSION(1,2) && val > 80) + val = 80; + + new_params.colourParams.contrast = val; + } else + retval = -EINVAL; + } + command_flags |= COMMAND_SETCOLOURPARAMS; + } else if (MATCH("saturation")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val <= 100) + new_params.colourParams.saturation = val; + else + retval = -EINVAL; + } + command_flags |= COMMAND_SETCOLOURPARAMS; + } else if (MATCH("sensor_fps")) { + if (!retval) + val = VALUE; + + if (!retval) { + /* find values so that sensorFPS is minimized, + * but >= val */ + if (val > 30) + retval = -EINVAL; + else if (val > 25) { + new_params.sensorFps.divisor = 0; + new_params.sensorFps.baserate = 1; + } else if (val > 15) { + new_params.sensorFps.divisor = 0; + new_params.sensorFps.baserate = 0; + } else if (val > 12) { + new_params.sensorFps.divisor = 1; + new_params.sensorFps.baserate = 1; + } else if (val > 7) { + new_params.sensorFps.divisor = 1; + new_params.sensorFps.baserate = 0; + } else if (val > 6) { + new_params.sensorFps.divisor = 2; + new_params.sensorFps.baserate = 1; + } else if (val > 3) { + new_params.sensorFps.divisor = 2; + new_params.sensorFps.baserate = 0; + } else { + new_params.sensorFps.divisor = 3; + /* Either base rate would work here */ + new_params.sensorFps.baserate = 1; + } + new_params.flickerControl.coarseJump = + flicker_jumps[new_mains] + [new_params.sensorFps.baserate] + [new_params.sensorFps.divisor]; + if (new_params.flickerControl.flickerMode) + command_flags |= COMMAND_SETFLICKERCTRL; + } + command_flags |= COMMAND_SETSENSORFPS; + } else if (MATCH("stream_start_line")) { + if (!retval) + val = VALUE; + + if (!retval) { + int max_line = 288; + + if (new_params.format.videoSize == VIDEOSIZE_QCIF) + max_line = 144; + if (val <= max_line) + new_params.streamStartLine = val/2; + else + retval = -EINVAL; + } + } else if (MATCH("ecp_timing")) { + if (!retval && MATCH("normal")) + new_params.ecpTiming = 0; + else if (!retval && MATCH("slow")) + new_params.ecpTiming = 1; + else + retval = -EINVAL; + + command_flags |= COMMAND_SETECPTIMING; + } else if (MATCH("color_balance_mode")) { + if (!retval && MATCH("manual")) + new_params.colourBalance.balanceModeIsAuto = 0; + else if (!retval && MATCH("auto")) + new_params.colourBalance.balanceModeIsAuto = 1; + else + retval = -EINVAL; + + command_flags |= COMMAND_SETCOLOURBALANCE; + } else if (MATCH("red_gain")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val <= 212) + new_params.colourBalance.redGain 
= val; + else + retval = -EINVAL; + } + command_flags |= COMMAND_SETCOLOURBALANCE; + } else if (MATCH("green_gain")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val <= 212) + new_params.colourBalance.greenGain = val; + else + retval = -EINVAL; + } + command_flags |= COMMAND_SETCOLOURBALANCE; + } else if (MATCH("blue_gain")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val <= 212) + new_params.colourBalance.blueGain = val; + else + retval = -EINVAL; + } + command_flags |= COMMAND_SETCOLOURBALANCE; + } else if (MATCH("max_gain")) { + if (!retval) + val = VALUE; + + if (!retval) { + /* 1-02 firmware limits gain to 2 */ + if (FIRMWARE_VERSION(1,2) && val > 2) + val = 2; + switch(val) { + case 1: + new_params.exposure.gainMode = 1; + break; + case 2: + new_params.exposure.gainMode = 2; + break; + case 4: + new_params.exposure.gainMode = 3; + break; + case 8: + new_params.exposure.gainMode = 4; + break; + default: + retval = -EINVAL; + break; + } + } + command_flags |= COMMAND_SETEXPOSURE; + } else if (MATCH("exposure_mode")) { + if (!retval && MATCH("auto")) + new_params.exposure.expMode = 2; + else if (!retval && MATCH("manual")) { + if (new_params.exposure.expMode == 2) + new_params.exposure.expMode = 3; + new_params.flickerControl.flickerMode = 0; + command_flags |= COMMAND_SETFLICKERCTRL; + } else + retval = -EINVAL; + + command_flags |= COMMAND_SETEXPOSURE; + } else if (MATCH("centre_weight")) { + if (!retval && MATCH("on")) + new_params.exposure.centreWeight = 1; + else if (!retval && MATCH("off")) + new_params.exposure.centreWeight = 2; + else + retval = -EINVAL; + + command_flags |= COMMAND_SETEXPOSURE; + } else if (MATCH("gain")) { + if (!retval) + val = VALUE; + + if (!retval) { + switch(val) { + case 1: + new_params.exposure.gain = 0; + new_params.exposure.expMode = 1; + new_params.flickerControl.flickerMode = 0; + command_flags |= COMMAND_SETFLICKERCTRL; + break; + case 2: + new_params.exposure.gain = 1; + new_params.exposure.expMode = 1; + new_params.flickerControl.flickerMode = 0; + command_flags |= COMMAND_SETFLICKERCTRL; + break; + case 4: + new_params.exposure.gain = 2; + new_params.exposure.expMode = 1; + new_params.flickerControl.flickerMode = 0; + command_flags |= COMMAND_SETFLICKERCTRL; + break; + case 8: + new_params.exposure.gain = 3; + new_params.exposure.expMode = 1; + new_params.flickerControl.flickerMode = 0; + command_flags |= COMMAND_SETFLICKERCTRL; + break; + default: + retval = -EINVAL; + break; + } + command_flags |= COMMAND_SETEXPOSURE; + if (new_params.exposure.gain > + new_params.exposure.gainMode-1) + retval = -EINVAL; + } + } else if (MATCH("fine_exp")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val < 256) { + /* 1-02 firmware limits fineExp to 127*/ + if (FIRMWARE_VERSION(1,2) && val > 127) + val = 127; + new_params.exposure.fineExp = val; + new_params.exposure.expMode = 1; + command_flags |= COMMAND_SETEXPOSURE; + new_params.flickerControl.flickerMode = 0; + command_flags |= COMMAND_SETFLICKERCTRL; + } else + retval = -EINVAL; + } + } else if (MATCH("coarse_exp")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val < 65536) { + /* 1-02 firmware limits + * coarseExp to 255 */ + if (FIRMWARE_VERSION(1,2) && val > 255) + val = 255; + new_params.exposure.coarseExpLo = + val & 0xff; + new_params.exposure.coarseExpHi = + val >> 8; + new_params.exposure.expMode = 1; + command_flags |= COMMAND_SETEXPOSURE; + new_params.flickerControl.flickerMode = 0; + command_flags |= COMMAND_SETFLICKERCTRL; + } else + retval = 
-EINVAL; + } + } else if (MATCH("red_comp")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val >= 220 && val <= 255) { + new_params.exposure.redComp = val; + command_flags |= COMMAND_SETEXPOSURE; + } else + retval = -EINVAL; + } + } else if (MATCH("green1_comp")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val >= 214 && val <= 255) { + new_params.exposure.green1Comp = val; + command_flags |= COMMAND_SETEXPOSURE; + } else + retval = -EINVAL; + } + } else if (MATCH("green2_comp")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val >= 214 && val <= 255) { + new_params.exposure.green2Comp = val; + command_flags |= COMMAND_SETEXPOSURE; + } else + retval = -EINVAL; + } + } else if (MATCH("blue_comp")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val >= 230 && val <= 255) { + new_params.exposure.blueComp = val; + command_flags |= COMMAND_SETEXPOSURE; + } else + retval = -EINVAL; + } + } else if (MATCH("apcor_gain1")) { + if (!retval) + val = VALUE; + + if (!retval) { + command_flags |= COMMAND_SETAPCOR; + if (val <= 0xff) + new_params.apcor.gain1 = val; + else + retval = -EINVAL; + } + } else if (MATCH("apcor_gain2")) { + if (!retval) + val = VALUE; + + if (!retval) { + command_flags |= COMMAND_SETAPCOR; + if (val <= 0xff) + new_params.apcor.gain2 = val; + else + retval = -EINVAL; + } + } else if (MATCH("apcor_gain4")) { + if (!retval) + val = VALUE; + + if (!retval) { + command_flags |= COMMAND_SETAPCOR; + if (val <= 0xff) + new_params.apcor.gain4 = val; + else + retval = -EINVAL; + } + } else if (MATCH("apcor_gain8")) { + if (!retval) + val = VALUE; + + if (!retval) { + command_flags |= COMMAND_SETAPCOR; + if (val <= 0xff) + new_params.apcor.gain8 = val; + else + retval = -EINVAL; + } + } else if (MATCH("vl_offset_gain1")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val <= 0xff) + new_params.vlOffset.gain1 = val; + else + retval = -EINVAL; + } + command_flags |= COMMAND_SETVLOFFSET; + } else if (MATCH("vl_offset_gain2")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val <= 0xff) + new_params.vlOffset.gain2 = val; + else + retval = -EINVAL; + } + command_flags |= COMMAND_SETVLOFFSET; + } else if (MATCH("vl_offset_gain4")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val <= 0xff) + new_params.vlOffset.gain4 = val; + else + retval = -EINVAL; + } + command_flags |= COMMAND_SETVLOFFSET; + } else if (MATCH("vl_offset_gain8")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val <= 0xff) + new_params.vlOffset.gain8 = val; + else + retval = -EINVAL; + } + command_flags |= COMMAND_SETVLOFFSET; + } else if (MATCH("flicker_control")) { + if (!retval && MATCH("on")) { + new_params.flickerControl.flickerMode = 1; + new_params.exposure.expMode = 2; + command_flags |= COMMAND_SETEXPOSURE; + } else if (!retval && MATCH("off")) + new_params.flickerControl.flickerMode = 0; + else + retval = -EINVAL; + + command_flags |= COMMAND_SETFLICKERCTRL; + } else if (MATCH("mains_frequency")) { + if (!retval && MATCH("50")) { + new_mains = 0; + new_params.flickerControl.coarseJump = + flicker_jumps[new_mains] + [new_params.sensorFps.baserate] + [new_params.sensorFps.divisor]; + if (new_params.flickerControl.flickerMode) + command_flags |= COMMAND_SETFLICKERCTRL; + } else if (!retval && MATCH("60")) { + new_mains = 1; + new_params.flickerControl.coarseJump = + flicker_jumps[new_mains] + [new_params.sensorFps.baserate] + [new_params.sensorFps.divisor]; + if (new_params.flickerControl.flickerMode) + command_flags |= 
COMMAND_SETFLICKERCTRL; + } else + retval = -EINVAL; + } else if (MATCH("allowable_overexposure")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val <= 0xff) { + new_params.flickerControl. + allowableOverExposure = val; + command_flags |= COMMAND_SETFLICKERCTRL; + } else + retval = -EINVAL; + } + } else if (MATCH("compression_mode")) { + if (!retval && MATCH("none")) + new_params.compression.mode = + CPIA_COMPRESSION_NONE; + else if (!retval && MATCH("auto")) + new_params.compression.mode = + CPIA_COMPRESSION_AUTO; + else if (!retval && MATCH("manual")) + new_params.compression.mode = + CPIA_COMPRESSION_MANUAL; + else + retval = -EINVAL; + + command_flags |= COMMAND_SETCOMPRESSION; + } else if (MATCH("decimation_enable")) { + if (!retval && MATCH("off")) + new_params.compression.decimation = 0; + else + retval = -EINVAL; + + command_flags |= COMMAND_SETCOMPRESSION; + } else if (MATCH("compression_target")) { + if (!retval && MATCH("quality")) + new_params.compressionTarget.frTargeting = + CPIA_COMPRESSION_TARGET_QUALITY; + else if (!retval && MATCH("framerate")) + new_params.compressionTarget.frTargeting = + CPIA_COMPRESSION_TARGET_FRAMERATE; + else + retval = -EINVAL; + + command_flags |= COMMAND_SETCOMPRESSIONTARGET; + } else if (MATCH("target_framerate")) { + if (!retval) + val = VALUE; + + if (!retval) + new_params.compressionTarget.targetFR = val; + command_flags |= COMMAND_SETCOMPRESSIONTARGET; + } else if (MATCH("target_quality")) { + if (!retval) + val = VALUE; + + if (!retval) + new_params.compressionTarget.targetQ = val; + + command_flags |= COMMAND_SETCOMPRESSIONTARGET; + } else if (MATCH("y_threshold")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val < 32) + new_params.yuvThreshold.yThreshold = val; + else + retval = -EINVAL; + } + command_flags |= COMMAND_SETYUVTHRESH; + } else if (MATCH("uv_threshold")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val < 32) + new_params.yuvThreshold.uvThreshold = val; + else + retval = -EINVAL; + } + command_flags |= COMMAND_SETYUVTHRESH; + } else if (MATCH("hysteresis")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val <= 0xff) + new_params.compressionParams.hysteresis = val; + else + retval = -EINVAL; + } + command_flags |= COMMAND_SETCOMPRESSIONPARAMS; + } else if (MATCH("threshold_max")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val <= 0xff) + new_params.compressionParams.threshMax = val; + else + retval = -EINVAL; + } + command_flags |= COMMAND_SETCOMPRESSIONPARAMS; + } else if (MATCH("small_step")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val <= 0xff) + new_params.compressionParams.smallStep = val; + else + retval = -EINVAL; + } + command_flags |= COMMAND_SETCOMPRESSIONPARAMS; + } else if (MATCH("large_step")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val <= 0xff) + new_params.compressionParams.largeStep = val; + else + retval = -EINVAL; + } + command_flags |= COMMAND_SETCOMPRESSIONPARAMS; + } else if (MATCH("decimation_hysteresis")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val <= 0xff) + new_params.compressionParams.decimationHysteresis = val; + else + retval = -EINVAL; + } + command_flags |= COMMAND_SETCOMPRESSIONPARAMS; + } else if (MATCH("fr_diff_step_thresh")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val <= 0xff) + new_params.compressionParams.frDiffStepThresh = val; + else + retval = -EINVAL; + } + command_flags |= COMMAND_SETCOMPRESSIONPARAMS; + } else if (MATCH("q_diff_step_thresh")) { + if (!retval) + 
val = VALUE; + + if (!retval) { + if (val <= 0xff) + new_params.compressionParams.qDiffStepThresh = val; + else + retval = -EINVAL; + } + command_flags |= COMMAND_SETCOMPRESSIONPARAMS; + } else if (MATCH("decimation_thresh_mod")) { + if (!retval) + val = VALUE; + + if (!retval) { + if (val <= 0xff) + new_params.compressionParams.decimationThreshMod = val; + else + retval = -EINVAL; + } + command_flags |= COMMAND_SETCOMPRESSIONPARAMS; + } else { + DBG("No match found\n"); + retval = -EINVAL; + } + + if (!retval) { + while (count && isspace(*buffer) && *buffer != '\n') { + --count; + ++buffer; + } + if (count) { + if (*buffer != '\n' && *buffer != ';') + retval = -EINVAL; + else { + --count; + ++buffer; + } + } + } + } +#undef MATCH +#undef FIRMWARE_VERSION +#undef VALUE +#undef FIND_VALUE +#undef FIND_END + if (!retval) { + if (command_flags & COMMAND_SETCOLOURPARAMS) { + /* Adjust cam->vp to reflect these changes */ + cam->vp.brightness = + new_params.colourParams.brightness*65535/100; + cam->vp.contrast = + new_params.colourParams.contrast*65535/100; + cam->vp.colour = + new_params.colourParams.saturation*65535/100; + } + + memcpy(&cam->params, &new_params, sizeof(struct cam_params)); + cam->mainsFreq = new_mains; + cam->cmd_queue |= command_flags; + retval = size; + } else + DBG("error: %d\n", retval); + + up(&cam->param_lock); + + return retval; +#endif +} + +static void create_proc_cpia_cam(struct cam_data *cam) +{ + char name[10]; /* "video" + up to 3 digit minor + NUL */ + struct proc_dir_entry *ent; + + if (!cpia_proc_root || !cam) + return; + + sprintf(name, "video%d", cam->vdev.minor); + + ent = create_proc_entry(name, S_IFREG|S_IRUGO|S_IWUSR, cpia_proc_root); + if (!ent) + return; + + ent->data = cam; + ent->read_proc = cpia_read_proc; + ent->write_proc = cpia_write_proc; + ent->size = 3626; + cam->proc_entry = ent; +} + +static void destroy_proc_cpia_cam(struct cam_data *cam) +{ + char name[10]; + + if (!cam || !cam->proc_entry) + return; + + sprintf(name, "video%d", cam->vdev.minor); + remove_proc_entry(name, cpia_proc_root); + cam->proc_entry = NULL; +} + +static void proc_cpia_create(void) +{ + cpia_proc_root = create_proc_entry("cpia", S_IFDIR, 0); + + if (cpia_proc_root) + cpia_proc_root->owner = THIS_MODULE; + else + LOG("Unable to initialise /proc/cpia\n"); +} + +static void proc_cpia_destroy(void) +{ + remove_proc_entry("cpia", 0); +} +#endif /* CONFIG_PROC_FS */ + +/* ----------------------- debug functions ---------------------- */ + +#define printstatus(cam) \ + DBG("%02x %02x %02x %02x %02x %02x %02x %02x\n",\ + cam->params.status.systemState, cam->params.status.grabState, \ + cam->params.status.streamState, cam->params.status.fatalError, \ + cam->params.status.cmdError, cam->params.status.debugFlags, \ + cam->params.status.vpStatus, cam->params.status.errorCode); + +/* ----------------------- v4l helpers -------------------------- */ + +/* supported frame palettes and depths */ +static inline int valid_mode(u16 palette, u16 depth) +{ + return (palette == VIDEO_PALETTE_GREY && depth == 8) || + (palette == VIDEO_PALETTE_RGB555 && depth == 16) || + (palette == VIDEO_PALETTE_RGB565 && depth == 16) || + (palette == VIDEO_PALETTE_RGB24 && depth == 24) || + (palette == VIDEO_PALETTE_RGB32 && depth == 32) || + (palette == VIDEO_PALETTE_YUV422 && depth == 16) || + (palette == VIDEO_PALETTE_YUYV && depth == 16) || + (palette == VIDEO_PALETTE_UYVY && depth == 16); +} + +static int match_videosize( int width, int height ) +{ + /* return the best match, where 'best' is as always + * the largest that is not bigger than 
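An aside for readers of the cpia_write_proc() parser that ends above: each write to /proc/cpia/videoX is a sequence of "name:value" settings, each terminated by a newline or ';' (the exact tokenizing lives in the MATCH/VALUE macros defined earlier in the file). A minimal userspace sketch, where the device number and the chosen setting are only examples:

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            /* one "name:value" setting per line; trailing junk is rejected */
            static const char cmd[] = "compression_mode:auto\n";
            int fd = open("/proc/cpia/video0", O_WRONLY);

            if (fd < 0)
                    return 1;
            if (write(fd, cmd, sizeof(cmd) - 1) < 0)
                    return 1;
            close(fd);
            return 0;
    }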
what is requested. */ + if (width>=352 && height>=288) + return VIDEOSIZE_352_288; /* CIF */ + + if (width>=320 && height>=240) + return VIDEOSIZE_320_240; /* SIF */ + + if (width>=288 && height>=216) + return VIDEOSIZE_288_216; + + if (width>=256 && height>=192) + return VIDEOSIZE_256_192; + + if (width>=224 && height>=168) + return VIDEOSIZE_224_168; + + if (width>=192 && height>=144) + return VIDEOSIZE_192_144; + + if (width>=176 && height>=144) + return VIDEOSIZE_176_144; /* QCIF */ + + if (width>=160 && height>=120) + return VIDEOSIZE_160_120; /* QSIF */ + + if (width>=128 && height>=96) + return VIDEOSIZE_128_96; + + if (width>=88 && height>=72) + return VIDEOSIZE_88_72; + + if (width>=64 && height>=48) + return VIDEOSIZE_64_48; + + if (width>=48 && height>=48) + return VIDEOSIZE_48_48; + + return -1; +} + +/* these are the capture sizes we support */ +static void set_vw_size(struct cam_data *cam) +{ + /* the col/row/start/end values are the result of simple math */ + /* study the SetROI-command in cpia developers guide p 2-22 */ + /* streamStartLine is set to the recommended value in the cpia */ + /* developers guide p 3-37 */ + switch(cam->video_size) { + case VIDEOSIZE_CIF: + cam->vw.width = 352; + cam->vw.height = 288; + cam->params.format.videoSize=VIDEOSIZE_CIF; + cam->params.roi.colStart=0; + cam->params.roi.colEnd=44; + cam->params.roi.rowStart=0; + cam->params.roi.rowEnd=72; + cam->params.streamStartLine = 120; + break; + case VIDEOSIZE_SIF: + cam->vw.width = 320; + cam->vw.height = 240; + cam->params.format.videoSize=VIDEOSIZE_CIF; + cam->params.roi.colStart=2; + cam->params.roi.colEnd=42; + cam->params.roi.rowStart=6; + cam->params.roi.rowEnd=66; + cam->params.streamStartLine = 120; + break; + case VIDEOSIZE_288_216: + cam->vw.width = 288; + cam->vw.height = 216; + cam->params.format.videoSize=VIDEOSIZE_CIF; + cam->params.roi.colStart=4; + cam->params.roi.colEnd=40; + cam->params.roi.rowStart=9; + cam->params.roi.rowEnd=63; + cam->params.streamStartLine = 120; + break; + case VIDEOSIZE_256_192: + cam->vw.width = 256; + cam->vw.height = 192; + cam->params.format.videoSize=VIDEOSIZE_CIF; + cam->params.roi.colStart=6; + cam->params.roi.colEnd=38; + cam->params.roi.rowStart=12; + cam->params.roi.rowEnd=60; + cam->params.streamStartLine = 120; + break; + case VIDEOSIZE_224_168: + cam->vw.width = 224; + cam->vw.height = 168; + cam->params.format.videoSize=VIDEOSIZE_CIF; + cam->params.roi.colStart=8; + cam->params.roi.colEnd=36; + cam->params.roi.rowStart=15; + cam->params.roi.rowEnd=57; + cam->params.streamStartLine = 120; + break; + case VIDEOSIZE_192_144: + cam->vw.width = 192; + cam->vw.height = 144; + cam->params.format.videoSize=VIDEOSIZE_CIF; + cam->params.roi.colStart=10; + cam->params.roi.colEnd=34; + cam->params.roi.rowStart=18; + cam->params.roi.rowEnd=54; + cam->params.streamStartLine = 120; + break; + case VIDEOSIZE_QCIF: + cam->vw.width = 176; + cam->vw.height = 144; + cam->params.format.videoSize=VIDEOSIZE_QCIF; + cam->params.roi.colStart=0; + cam->params.roi.colEnd=22; + cam->params.roi.rowStart=0; + cam->params.roi.rowEnd=36; + cam->params.streamStartLine = 60; + break; + case VIDEOSIZE_QSIF: + cam->vw.width = 160; + cam->vw.height = 120; + cam->params.format.videoSize=VIDEOSIZE_QCIF; + cam->params.roi.colStart=1; + cam->params.roi.colEnd=21; + cam->params.roi.rowStart=3; + cam->params.roi.rowEnd=33; + cam->params.streamStartLine = 60; + break; + case VIDEOSIZE_128_96: + cam->vw.width = 128; + cam->vw.height = 96; + cam->params.format.videoSize=VIDEOSIZE_QCIF; + 
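To make the 'best match' rule concrete, here is a small standalone sketch (userspace C, table copied from the checks above) showing that a 300x250 request falls through to 288x216, the largest mode not bigger than the request:

    #include <stdio.h>

    static const int modes[][2] = {
            {352, 288}, {320, 240}, {288, 216}, {256, 192}, {224, 168},
            {192, 144}, {176, 144}, {160, 120}, {128, 96}, {88, 72},
            {64, 48}, {48, 48},
    };

    int main(void)
    {
            int w = 300, h = 250, i;

            for (i = 0; i < 12; i++)
                    if (w >= modes[i][0] && h >= modes[i][1])
                            break;
            if (i == 12)
                    return 1;       /* smaller than 48x48: no match */
            printf("%dx%d\n", modes[i][0], modes[i][1]);    /* 288x216 */
            return 0;
    }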
cam->params.roi.colStart=3; + cam->params.roi.colEnd=19; + cam->params.roi.rowStart=6; + cam->params.roi.rowEnd=30; + cam->params.streamStartLine = 60; + break; + case VIDEOSIZE_88_72: + cam->vw.width = 88; + cam->vw.height = 72; + cam->params.format.videoSize=VIDEOSIZE_QCIF; + cam->params.roi.colStart=5; + cam->params.roi.colEnd=16; + cam->params.roi.rowStart=9; + cam->params.roi.rowEnd=27; + cam->params.streamStartLine = 60; + break; + case VIDEOSIZE_64_48: + cam->vw.width = 64; + cam->vw.height = 48; + cam->params.format.videoSize=VIDEOSIZE_QCIF; + cam->params.roi.colStart=7; + cam->params.roi.colEnd=15; + cam->params.roi.rowStart=12; + cam->params.roi.rowEnd=24; + cam->params.streamStartLine = 60; + break; + case VIDEOSIZE_48_48: + cam->vw.width = 48; + cam->vw.height = 48; + cam->params.format.videoSize=VIDEOSIZE_QCIF; + cam->params.roi.colStart=8; + cam->params.roi.colEnd=14; + cam->params.roi.rowStart=6; + cam->params.roi.rowEnd=30; + cam->params.streamStartLine = 60; + break; + default: + LOG("bad videosize value: %d\n", cam->video_size); + } + + return; +} + +static int allocate_frame_buf(struct cam_data *cam) +{ + int i; + + cam->frame_buf = rvmalloc(FRAME_NUM * CPIA_MAX_FRAME_SIZE); + if (!cam->frame_buf) + return -ENOBUFS; + + for (i = 0; i < FRAME_NUM; i++) + cam->frame[i].data = cam->frame_buf + i * CPIA_MAX_FRAME_SIZE; + + return 0; +} + +static int free_frame_buf(struct cam_data *cam) +{ + int i; + + rvfree(cam->frame_buf, FRAME_NUM*CPIA_MAX_FRAME_SIZE); + cam->frame_buf = 0; + for (i=0; i < FRAME_NUM; i++) + cam->frame[i].data = NULL; + + return 0; +} + + +static void inline free_frames(struct cpia_frame frame[FRAME_NUM]) +{ + int i; + + for (i=0; i < FRAME_NUM; i++) + frame[i].state = FRAME_UNUSED; + return; +} + +/********************************************************************** + * + * General functions + * + **********************************************************************/ +/* send an arbitrary command to the camera */ +static int do_command(struct cam_data *cam, u16 command, u8 a, u8 b, u8 c, u8 d) +{ + int retval, datasize; + u8 cmd[8], data[8]; + + switch(command) { + case CPIA_COMMAND_GetCPIAVersion: + case CPIA_COMMAND_GetPnPID: + case CPIA_COMMAND_GetCameraStatus: + case CPIA_COMMAND_GetVPVersion: + datasize=8; + break; + case CPIA_COMMAND_GetColourParams: + case CPIA_COMMAND_GetColourBalance: + case CPIA_COMMAND_GetExposure: + down(&cam->param_lock); + datasize=8; + break; + default: + datasize=0; + break; + } + + cmd[0] = command>>8; + cmd[1] = command&0xff; + cmd[2] = a; + cmd[3] = b; + cmd[4] = c; + cmd[5] = d; + cmd[6] = datasize; + cmd[7] = 0; + + retval = cam->ops->transferCmd(cam->lowlevel_data, cmd, data); + if (retval) { + DBG("%x - failed, retval=%d\n", command, retval); + if (command == CPIA_COMMAND_GetColourParams || + command == CPIA_COMMAND_GetColourBalance || + command == CPIA_COMMAND_GetExposure) + up(&cam->param_lock); + } else { + switch(command) { + case CPIA_COMMAND_GetCPIAVersion: + cam->params.version.firmwareVersion = data[0]; + cam->params.version.firmwareRevision = data[1]; + cam->params.version.vcVersion = data[2]; + cam->params.version.vcRevision = data[3]; + break; + case CPIA_COMMAND_GetPnPID: + cam->params.pnpID.vendor = data[0]+(((u16)data[1])<<8); + cam->params.pnpID.product = data[2]+(((u16)data[3])<<8); + cam->params.pnpID.deviceRevision = + data[4]+(((u16)data[5])<<8); + break; + case CPIA_COMMAND_GetCameraStatus: + cam->params.status.systemState = data[0]; + cam->params.status.grabState = data[1]; + 
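A note on the ROI numbers used by set_vw_size() above: the table values imply that colStart/colEnd are counted in 8-pixel units and rowStart/rowEnd in 4-line units (see the SetROI command in the CPiA developer's guide, p 2-22), so every case satisfies width = (colEnd-colStart)*8 and height = (rowEnd-rowStart)*4. A quick check of the 288x216 case:

    #include <stdio.h>

    int main(void)
    {
            /* VIDEOSIZE_288_216: colStart=4, colEnd=40, rowStart=9, rowEnd=63 */
            printf("%dx%d\n", (40 - 4) * 8, (63 - 9) * 4);  /* 288x216 */
            return 0;
    }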
cam->params.status.streamState = data[2]; + cam->params.status.fatalError = data[3]; + cam->params.status.cmdError = data[4]; + cam->params.status.debugFlags = data[5]; + cam->params.status.vpStatus = data[6]; + cam->params.status.errorCode = data[7]; + break; + case CPIA_COMMAND_GetVPVersion: + cam->params.vpVersion.vpVersion = data[0]; + cam->params.vpVersion.vpRevision = data[1]; + cam->params.vpVersion.cameraHeadID = + data[2]+(((u16)data[3])<<8); + break; + case CPIA_COMMAND_GetColourParams: + cam->params.colourParams.brightness = data[0]; + cam->params.colourParams.contrast = data[1]; + cam->params.colourParams.saturation = data[2]; + up(&cam->param_lock); + break; + case CPIA_COMMAND_GetColourBalance: + cam->params.colourBalance.redGain = data[0]; + cam->params.colourBalance.greenGain = data[1]; + cam->params.colourBalance.blueGain = data[2]; + up(&cam->param_lock); + break; + case CPIA_COMMAND_GetExposure: + cam->params.exposure.gain = data[0]; + cam->params.exposure.fineExp = data[1]; + cam->params.exposure.coarseExpLo = data[2]; + cam->params.exposure.coarseExpHi = data[3]; + cam->params.exposure.redComp = data[4]; + cam->params.exposure.green1Comp = data[5]; + cam->params.exposure.green2Comp = data[6]; + cam->params.exposure.blueComp = data[7]; + /* If the *Comp parameters are wacko, generate + * a warning, and reset them back to default + * values. - rich@annexia.org + */ + if (cam->params.exposure.redComp < 220 || + cam->params.exposure.redComp > 255 || + cam->params.exposure.green1Comp < 214 || + cam->params.exposure.green1Comp > 255 || + cam->params.exposure.green2Comp < 214 || + cam->params.exposure.green2Comp > 255 || + cam->params.exposure.blueComp < 230 || + cam->params.exposure.blueComp > 255) + { + printk(KERN_WARNING "*_comp parameters have gone AWOL (%d/%d/%d/%d) - resetting them\n", + cam->params.exposure.redComp, + cam->params.exposure.green1Comp, + cam->params.exposure.green2Comp, + cam->params.exposure.blueComp); + cam->params.exposure.redComp = 220; + cam->params.exposure.green1Comp = 214; + cam->params.exposure.green2Comp = 214; + cam->params.exposure.blueComp = 230; + } + up(&cam->param_lock); + break; + default: + break; + } + } + return retval; +} + +/* send a command to the camera with an additional data transaction */ +static int do_command_extended(struct cam_data *cam, u16 command, + u8 a, u8 b, u8 c, u8 d, + u8 e, u8 f, u8 g, u8 h, + u8 i, u8 j, u8 k, u8 l) +{ + int retval; + u8 cmd[8], data[8]; + + cmd[0] = command>>8; + cmd[1] = command&0xff; + cmd[2] = a; + cmd[3] = b; + cmd[4] = c; + cmd[5] = d; + cmd[6] = 8; + cmd[7] = 0; + data[0] = e; + data[1] = f; + data[2] = g; + data[3] = h; + data[4] = i; + data[5] = j; + data[6] = k; + data[7] = l; + + retval = cam->ops->transferCmd(cam->lowlevel_data, cmd, data); + if (retval) + LOG("%x - failed\n", command); + + return retval; +} + +/********************************************************************** + * + * Colorspace conversion + * + **********************************************************************/ +#define LIMIT(x) ((((x)>0xffffff)?0xff0000:(((x)<=0xffff)?0:(x)&0xff0000))>>16) + +static int yuvconvert(unsigned char *yuv, unsigned char *rgb, int out_fmt, + int in_uyvy, int mmap_kludge) +{ + int y, u, v, r, g, b, y1; + + switch(out_fmt) { + case VIDEO_PALETTE_RGB555: + case VIDEO_PALETTE_RGB565: + case VIDEO_PALETTE_RGB24: + case VIDEO_PALETTE_RGB32: + if (in_uyvy) { + u = *yuv++ - 128; + y = (*yuv++ - 16) * 76310; + v = *yuv++ - 128; + y1 = (*yuv - 16) * 76310; + } else { + y = (*yuv++ - 16) * 
76310; + u = *yuv++ - 128; + y1 = (*yuv++ - 16) * 76310; + v = *yuv - 128; + } + r = 104635 * v; + g = -25690 * u + -53294 * v; + b = 132278 * u; + break; + default: + y = *yuv++; + u = *yuv++; + y1 = *yuv++; + v = *yuv; + /* Just to avoid compiler warnings */ + r = 0; + g = 0; + b = 0; + break; + } + switch(out_fmt) { + case VIDEO_PALETTE_RGB555: + *rgb++ = ((LIMIT(g+y) & 0xf8) << 2) | (LIMIT(b+y) >> 3); + *rgb++ = ((LIMIT(r+y) & 0xf8) >> 1) | (LIMIT(g+y) >> 6); + *rgb++ = ((LIMIT(g+y1) & 0xf8) << 2) | (LIMIT(b+y1) >> 3); + *rgb = ((LIMIT(r+y1) & 0xf8) >> 1) | (LIMIT(g+y1) >> 6); + return 4; + case VIDEO_PALETTE_RGB565: + *rgb++ = ((LIMIT(g+y) & 0xfc) << 3) | (LIMIT(b+y) >> 3); + *rgb++ = (LIMIT(r+y) & 0xf8) | (LIMIT(g+y) >> 5); + *rgb++ = ((LIMIT(g+y1) & 0xfc) << 3) | (LIMIT(b+y1) >> 3); + *rgb = (LIMIT(r+y1) & 0xf8) | (LIMIT(g+y1) >> 5); + return 4; + case VIDEO_PALETTE_RGB24: + if (mmap_kludge) { + *rgb++ = LIMIT(b+y); + *rgb++ = LIMIT(g+y); + *rgb++ = LIMIT(r+y); + *rgb++ = LIMIT(b+y1); + *rgb++ = LIMIT(g+y1); + *rgb = LIMIT(r+y1); + } else { + *rgb++ = LIMIT(r+y); + *rgb++ = LIMIT(g+y); + *rgb++ = LIMIT(b+y); + *rgb++ = LIMIT(r+y1); + *rgb++ = LIMIT(g+y1); + *rgb = LIMIT(b+y1); + } + return 6; + case VIDEO_PALETTE_RGB32: + if (mmap_kludge) { + *rgb++ = LIMIT(b+y); + *rgb++ = LIMIT(g+y); + *rgb++ = LIMIT(r+y); + rgb++; + *rgb++ = LIMIT(b+y1); + *rgb++ = LIMIT(g+y1); + *rgb = LIMIT(r+y1); + } else { + *rgb++ = LIMIT(r+y); + *rgb++ = LIMIT(g+y); + *rgb++ = LIMIT(b+y); + rgb++; + *rgb++ = LIMIT(r+y1); + *rgb++ = LIMIT(g+y1); + *rgb = LIMIT(b+y1); + } + return 8; + case VIDEO_PALETTE_GREY: + *rgb++ = y; + *rgb = y1; + return 2; + case VIDEO_PALETTE_YUV422: + case VIDEO_PALETTE_YUYV: + *rgb++ = y; + *rgb++ = u; + *rgb++ = y1; + *rgb = v; + return 4; + case VIDEO_PALETTE_UYVY: + *rgb++ = u; + *rgb++ = y; + *rgb++ = v; + *rgb = y1; + return 4; + default: + DBG("Empty: %d\n", out_fmt); + return 0; + } +} + +static int skipcount(int count, int fmt) +{ + switch(fmt) { + case VIDEO_PALETTE_GREY: + case VIDEO_PALETTE_RGB555: + case VIDEO_PALETTE_RGB565: + case VIDEO_PALETTE_YUV422: + case VIDEO_PALETTE_YUYV: + case VIDEO_PALETTE_UYVY: + return 2*count; + case VIDEO_PALETTE_RGB24: + return 3*count; + case VIDEO_PALETTE_RGB32: + return 4*count; + default: + return 0; + } +} + +static int parse_picture(struct cam_data *cam, int size) +{ + u8 *obuf, *ibuf, *end_obuf; + int ll, in_uyvy, compressed, origsize, out_fmt; + + /* make sure params don't change while we are decoding */ + down(&cam->param_lock); + + obuf = cam->decompressed_frame.data; + end_obuf = obuf+CPIA_MAX_FRAME_SIZE; + ibuf = cam->raw_image; + origsize = size; + out_fmt = cam->vp.palette; + + if ((ibuf[0] != MAGIC_0) || (ibuf[1] != MAGIC_1)) { + LOG("header not found\n"); + up(&cam->param_lock); + return -1; + } + + if ((ibuf[16] != VIDEOSIZE_QCIF) && (ibuf[16] != VIDEOSIZE_CIF)) { + LOG("wrong video size\n"); + up(&cam->param_lock); + return -1; + } + + if (ibuf[17] != SUBSAMPLE_422) { + LOG("illegal subtype %d\n",ibuf[17]); + up(&cam->param_lock); + return -1; + } + + if (ibuf[18] != YUVORDER_YUYV && ibuf[18] != YUVORDER_UYVY) { + LOG("illegal yuvorder %d\n",ibuf[18]); + up(&cam->param_lock); + return -1; + } + in_uyvy = ibuf[18] == YUVORDER_UYVY; + +#if 0 + /* FIXME: ROI mismatch occurs when switching capture sizes */ + if ((ibuf[24] != cam->params.roi.colStart) || + (ibuf[25] != cam->params.roi.colEnd) || + (ibuf[26] != cam->params.roi.rowStart) || + (ibuf[27] != cam->params.roi.rowEnd)) { + LOG("ROI mismatch\n"); + 
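The magic numbers in yuvconvert() above are the common ITU-R 601 conversion coefficients scaled into 16.16 fixed point: 76310 is about 1.164*65536, 104635 about 1.596*65536, 25690 about 0.392*65536, 53294 about 0.813*65536 and 132278 about 2.018*65536. LIMIT() then clamps the 16.16 result into 0..255 and keeps only the integer byte. A standalone sketch of one saturating red sample:

    #include <stdio.h>

    #define LIMIT(x) \
            ((((x)>0xffffff)?0xff0000:(((x)<=0xffff)?0:(x)&0xff0000))>>16)

    int main(void)
    {
            int y = (235 - 16) * 76310;     /* brightest legal luma */
            int v = 240 - 128;              /* strong red chroma */

            /* sum exceeds 0xffffff, so LIMIT() saturates: prints 255 */
            printf("R = %d\n", LIMIT(104635 * v + y));
            return 0;
    }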
up(&cam->param_lock); + return -1; + } +#endif + + if ((ibuf[28] != NOT_COMPRESSED) && (ibuf[28] != COMPRESSED)) { + LOG("illegal compression %d\n",ibuf[28]); + up(&cam->param_lock); + return -1; + } + compressed = (ibuf[28] == COMPRESSED); + + if (ibuf[29] != NO_DECIMATION) { + LOG("decimation not supported\n"); + up(&cam->param_lock); + return -1; + } + + cam->params.yuvThreshold.yThreshold = ibuf[30]; + cam->params.yuvThreshold.uvThreshold = ibuf[31]; + cam->params.status.systemState = ibuf[32]; + cam->params.status.grabState = ibuf[33]; + cam->params.status.streamState = ibuf[34]; + cam->params.status.fatalError = ibuf[35]; + cam->params.status.cmdError = ibuf[36]; + cam->params.status.debugFlags = ibuf[37]; + cam->params.status.vpStatus = ibuf[38]; + cam->params.status.errorCode = ibuf[39]; + cam->fps = ibuf[41]; + up(&cam->param_lock); + + ibuf += FRAME_HEADER_SIZE; + size -= FRAME_HEADER_SIZE; + ll = ibuf[0] | (ibuf[1] << 8); + ibuf += 2; + + while (size > 0) { + size -= (ll+2); + if (size < 0) { + LOG("Insufficient data in buffer\n"); + return -1; + } + + while (ll > 1) { + if (!compressed || (compressed && !(*ibuf & 1))) { + obuf += yuvconvert(ibuf, obuf, out_fmt, + in_uyvy, cam->mmap_kludge); + ibuf += 4; + ll -= 4; + } else { + /*skip compressed interval from previous frame*/ + int skipsize = skipcount(*ibuf >> 1, out_fmt); + obuf += skipsize; + if (obuf > end_obuf) { + LOG("Insufficient data in buffer\n"); + return -1; + } + ++ibuf; + ll--; + } + } + if (ll == 1) { + if (*ibuf != EOL) { + LOG("EOL not found giving up after %d/%d" + " bytes\n", origsize-size, origsize); + return -1; + } + + ibuf++; /* skip over EOL */ + + if ((size > 3) && (ibuf[0] == EOI) && (ibuf[1] == EOI) && + (ibuf[2] == EOI) && (ibuf[3] == EOI)) { + size -= 4; + break; + } + + if (size > 1) { + ll = ibuf[0] | (ibuf[1] << 8); + ibuf += 2; /* skip over line length */ + } + } else { + LOG("line length was not 1 but %d after %d/%d bytes\n", + ll, origsize-size, origsize); + return -1; + } + } + + cam->decompressed_frame.count = obuf-cam->decompressed_frame.data; + + return cam->decompressed_frame.count; +} + +/* InitStreamCap wrapper to select correct start line */ +static inline int init_stream_cap(struct cam_data *cam) +{ + return do_command(cam, CPIA_COMMAND_InitStreamCap, + 0, cam->params.streamStartLine, 0, 0); +} + +/* update various camera modes and settings */ +static void dispatch_commands(struct cam_data *cam) +{ + down(&cam->param_lock); + if (cam->cmd_queue==COMMAND_NONE) { + up(&cam->param_lock); + return; + } + DEB_BYTE(cam->cmd_queue); + DEB_BYTE(cam->cmd_queue>>8); + if (cam->cmd_queue & COMMAND_SETCOLOURPARAMS) + do_command(cam, CPIA_COMMAND_SetColourParams, + cam->params.colourParams.brightness, + cam->params.colourParams.contrast, + cam->params.colourParams.saturation, 0); + + if (cam->cmd_queue & COMMAND_SETCOMPRESSION) + do_command(cam, CPIA_COMMAND_SetCompression, + cam->params.compression.mode, + cam->params.compression.decimation, 0, 0); + + if (cam->cmd_queue & COMMAND_SETFORMAT) { + do_command(cam, CPIA_COMMAND_SetFormat, + cam->params.format.videoSize, + cam->params.format.subSample, + cam->params.format.yuvOrder, 0); + do_command(cam, CPIA_COMMAND_SetROI, + cam->params.roi.colStart, cam->params.roi.colEnd, + cam->params.roi.rowStart, cam->params.roi.rowEnd); + cam->first_frame = 1; + } + + if (cam->cmd_queue & COMMAND_SETCOMPRESSIONTARGET) + do_command(cam, CPIA_COMMAND_SetCompressionTarget, + cam->params.compressionTarget.frTargeting, + cam->params.compressionTarget.targetFR, + 
cam->params.compressionTarget.targetQ, 0); + + if (cam->cmd_queue & COMMAND_SETYUVTHRESH) + do_command(cam, CPIA_COMMAND_SetYUVThresh, + cam->params.yuvThreshold.yThreshold, + cam->params.yuvThreshold.uvThreshold, 0, 0); + + if (cam->cmd_queue & COMMAND_SETECPTIMING) + do_command(cam, CPIA_COMMAND_SetECPTiming, + cam->params.ecpTiming, 0, 0, 0); + + if (cam->cmd_queue & COMMAND_SETCOMPRESSIONPARAMS) + do_command_extended(cam, CPIA_COMMAND_SetCompressionParams, + 0, 0, 0, 0, + cam->params.compressionParams.hysteresis, + cam->params.compressionParams.threshMax, + cam->params.compressionParams.smallStep, + cam->params.compressionParams.largeStep, + cam->params.compressionParams.decimationHysteresis, + cam->params.compressionParams.frDiffStepThresh, + cam->params.compressionParams.qDiffStepThresh, + cam->params.compressionParams.decimationThreshMod); + + if (cam->cmd_queue & COMMAND_SETEXPOSURE) + do_command_extended(cam, CPIA_COMMAND_SetExposure, + cam->params.exposure.gainMode, + cam->params.exposure.expMode, + cam->params.exposure.compMode, + cam->params.exposure.centreWeight, + cam->params.exposure.gain, + cam->params.exposure.fineExp, + cam->params.exposure.coarseExpLo, + cam->params.exposure.coarseExpHi, + cam->params.exposure.redComp, + cam->params.exposure.green1Comp, + cam->params.exposure.green2Comp, + cam->params.exposure.blueComp); + + if (cam->cmd_queue & COMMAND_SETCOLOURBALANCE) { + if (cam->params.colourBalance.balanceModeIsAuto) { + do_command(cam, CPIA_COMMAND_SetColourBalance, + 2, 0, 0, 0); + } else { + do_command(cam, CPIA_COMMAND_SetColourBalance, + 1, + cam->params.colourBalance.redGain, + cam->params.colourBalance.greenGain, + cam->params.colourBalance.blueGain); + do_command(cam, CPIA_COMMAND_SetColourBalance, + 3, 0, 0, 0); + } + } + + if (cam->cmd_queue & COMMAND_SETSENSORFPS) + do_command(cam, CPIA_COMMAND_SetSensorFPS, + cam->params.sensorFps.divisor, + cam->params.sensorFps.baserate, 0, 0); + + if (cam->cmd_queue & COMMAND_SETAPCOR) + do_command(cam, CPIA_COMMAND_SetApcor, + cam->params.apcor.gain1, + cam->params.apcor.gain2, + cam->params.apcor.gain4, + cam->params.apcor.gain8); + + if (cam->cmd_queue & COMMAND_SETFLICKERCTRL) + do_command(cam, CPIA_COMMAND_SetFlickerCtrl, + cam->params.flickerControl.flickerMode, + cam->params.flickerControl.coarseJump, + cam->params.flickerControl.allowableOverExposure, 0); + + if (cam->cmd_queue & COMMAND_SETVLOFFSET) + do_command(cam, CPIA_COMMAND_SetVLOffset, + cam->params.vlOffset.gain1, + cam->params.vlOffset.gain2, + cam->params.vlOffset.gain4, + cam->params.vlOffset.gain8); + + if (cam->cmd_queue & COMMAND_PAUSE) + do_command(cam, CPIA_COMMAND_EndStreamCap, 0, 0, 0, 0); + + if (cam->cmd_queue & COMMAND_RESUME) + init_stream_cap(cam); + + up(&cam->param_lock); + cam->cmd_queue = COMMAND_NONE; + return; +} + +/* kernel thread function to read image from camera */ +static void fetch_frame(void *data) +{ + int image_size, retry; + struct cam_data *cam = (struct cam_data *)data; + unsigned long oldjif, rate, diff; + + /* Allow up to two bad images in a row to be read and + * ignored before an error is reported */ + for (retry = 0; retry < 3; ++retry) { + if (retry) + DBG("retry=%d\n", retry); + + if (!cam->ops) + continue; + + /* load first frame always uncompressed */ + if (cam->first_frame && + cam->params.compression.mode != CPIA_COMPRESSION_NONE) + do_command(cam, CPIA_COMMAND_SetCompression, + CPIA_COMPRESSION_NONE, + NO_DECIMATION, 0, 0); + + /* init camera upload */ + if (do_command(cam, CPIA_COMMAND_SetGrabMode, + 
CPIA_GRAB_CONTINUOUS, 0, 0, 0)) + continue; + + if (do_command(cam, CPIA_COMMAND_GrabFrame, 0, + cam->params.streamStartLine, 0, 0)) + continue; + + if (cam->ops->wait_for_stream_ready) { + /* loop until image ready */ + do_command(cam, CPIA_COMMAND_GetCameraStatus,0,0,0,0); + while (cam->params.status.streamState != STREAM_READY) { + if (current->need_resched) + schedule(); + + current->state = TASK_INTERRUPTIBLE; + + /* sleep for 10 ms, hopefully ;) */ + schedule_timeout(10*HZ/1000); + if (signal_pending(current)) + return; + + do_command(cam, CPIA_COMMAND_GetCameraStatus, + 0, 0, 0, 0); + } + } + + /* grab image from camera */ + if (current->need_resched) + schedule(); + + oldjif = jiffies; + image_size = cam->ops->streamRead(cam->lowlevel_data, + cam->raw_image, 0); + if (image_size <= 0) { + DBG("streamRead failed: %d\n", image_size); + continue; + } + + rate = image_size * HZ / 1024; + diff = jiffies-oldjif; + cam->transfer_rate = diff==0 ? rate : rate/diff; + /* diff==0 ? unlikely but possible */ + + /* camera idle now so dispatch queued commands */ + dispatch_commands(cam); + + /* Update our knowledge of the camera state - FIXME: necessary? */ + do_command(cam, CPIA_COMMAND_GetColourBalance, 0, 0, 0, 0); + do_command(cam, CPIA_COMMAND_GetExposure, 0, 0, 0, 0); + + /* decompress and convert the image by copying it from + * raw_image to decompressed_frame + */ + if (current->need_resched) + schedule(); + + cam->image_size = parse_picture(cam, image_size); + if (cam->image_size <= 0) + DBG("parse_picture failed %d\n", cam->image_size); + else + break; + } + + if (retry < 3) { + /* FIXME: this only works for double buffering */ + if (cam->frame[cam->curframe].state == FRAME_READY) { + memcpy(cam->frame[cam->curframe].data, + cam->decompressed_frame.data, + cam->decompressed_frame.count); + cam->frame[cam->curframe].state = FRAME_DONE; + } else + cam->decompressed_frame.state = FRAME_DONE; + +#if 0 + if (cam->first_frame && + cam->params.compression.mode != CPIA_COMPRESSION_NONE) { + cam->first_frame = 0; + cam->cmd_queue |= COMMAND_SETCOMPRESSION; + } +#else + if (cam->first_frame) { + cam->first_frame = 0; + cam->cmd_queue |= COMMAND_SETCOMPRESSION; + cam->cmd_queue |= COMMAND_SETEXPOSURE; + } +#endif + } +} + +static int capture_frame(struct cam_data *cam, struct video_mmap *vm) +{ + int retval = 0; + + if (!cam->frame_buf) { + /* we do lazy allocation */ + if ((retval = allocate_frame_buf(cam))) + return retval; + } + + /* FIXME: the first frame seems to be captured by the camera + without regard to any initial settings, so we throw away + that one, the next one is generated with our settings + (exposure, color balance, ...) 
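About the transfer_rate bookkeeping in fetch_frame() above: image_size * HZ / 1024 is the rate in KB/s if the read had taken exactly one jiffy, and dividing by the measured jiffy delta rescales it to the real elapsed time (the diff==0 guard covers sub-jiffy reads). A worked example with illustrative numbers:

    #include <stdio.h>

    int main(void)
    {
            long hz = 100;                  /* assumed jiffy rate */
            long image_size = 65536;        /* bytes from streamRead() */
            long diff = 50;                 /* jiffies elapsed */
            long rate = image_size * hz / 1024;

            /* 65536*100/1024 = 6400, over 50 jiffies => 128 KB/s */
            printf("%ld KB/s\n", diff ? rate / diff : rate);
            return 0;
    }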
+ */ + if (cam->first_frame) { + cam->curframe = vm->frame; + cam->frame[cam->curframe].state = FRAME_READY; + fetch_frame(cam); + if (cam->frame[cam->curframe].state != FRAME_DONE) + retval = -EIO; + } + cam->curframe = vm->frame; + cam->frame[cam->curframe].state = FRAME_READY; + fetch_frame(cam); + if (cam->frame[cam->curframe].state != FRAME_DONE) + retval=-EIO; + + return retval; +} + +static int goto_high_power(struct cam_data *cam) +{ + if (do_command(cam, CPIA_COMMAND_GotoHiPower, 0, 0, 0, 0)) + return -1; + mdelay(100); /* windows driver does it too */ + if (do_command(cam, CPIA_COMMAND_GetCameraStatus, 0, 0, 0, 0)) + return -1; + if (cam->params.status.systemState == HI_POWER_STATE) { + DBG("camera now in HIGH power state\n"); + return 0; + } + printstatus(cam); + return -1; +} + +static int goto_low_power(struct cam_data *cam) +{ + if (do_command(cam, CPIA_COMMAND_GotoLoPower, 0, 0, 0, 0)) + return -1; + if (do_command(cam, CPIA_COMMAND_GetCameraStatus, 0, 0, 0, 0)) + return -1; + if (cam->params.status.systemState == LO_POWER_STATE) { + DBG("camera now in LOW power state\n"); + return 0; + } + printstatus(cam); + return -1; +} + +static void save_camera_state(struct cam_data *cam) +{ + do_command(cam, CPIA_COMMAND_GetColourBalance, 0, 0, 0, 0); + do_command(cam, CPIA_COMMAND_GetExposure, 0, 0, 0, 0); + + DBG("%d/%d/%d/%d/%d/%d/%d/%d\n", + cam->params.exposure.gain, + cam->params.exposure.fineExp, + cam->params.exposure.coarseExpLo, + cam->params.exposure.coarseExpHi, + cam->params.exposure.redComp, + cam->params.exposure.green1Comp, + cam->params.exposure.green2Comp, + cam->params.exposure.blueComp); + DBG("%d/%d/%d\n", + cam->params.colourBalance.redGain, + cam->params.colourBalance.greenGain, + cam->params.colourBalance.blueGain); +} + +static void set_camera_state(struct cam_data *cam) +{ + if(cam->params.colourBalance.balanceModeIsAuto) { + do_command(cam, CPIA_COMMAND_SetColourBalance, + 2, 0, 0, 0); + } else { + do_command(cam, CPIA_COMMAND_SetColourBalance, + 1, + cam->params.colourBalance.redGain, + cam->params.colourBalance.greenGain, + cam->params.colourBalance.blueGain); + do_command(cam, CPIA_COMMAND_SetColourBalance, + 3, 0, 0, 0); + } + + + do_command_extended(cam, CPIA_COMMAND_SetExposure, + cam->params.exposure.gainMode, 1, 1, + cam->params.exposure.centreWeight, + cam->params.exposure.gain, + cam->params.exposure.fineExp, + cam->params.exposure.coarseExpLo, + cam->params.exposure.coarseExpHi, + cam->params.exposure.redComp, + cam->params.exposure.green1Comp, + cam->params.exposure.green2Comp, + cam->params.exposure.blueComp); + do_command_extended(cam, CPIA_COMMAND_SetExposure, + 0, 3, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0); + + if (!cam->params.exposure.gainMode) + cam->params.exposure.gainMode = 2; + if (!cam->params.exposure.expMode) + cam->params.exposure.expMode = 2; + if (!cam->params.exposure.centreWeight) + cam->params.exposure.centreWeight = 1; + + cam->cmd_queue = COMMAND_SETCOMPRESSION | + COMMAND_SETCOMPRESSIONTARGET | + COMMAND_SETCOLOURPARAMS | + COMMAND_SETFORMAT | + COMMAND_SETYUVTHRESH | + COMMAND_SETECPTIMING | + COMMAND_SETCOMPRESSIONPARAMS | +#if 0 + COMMAND_SETEXPOSURE | +#endif + COMMAND_SETCOLOURBALANCE | + COMMAND_SETSENSORFPS | + COMMAND_SETAPCOR | + COMMAND_SETFLICKERCTRL | + COMMAND_SETVLOFFSET; + dispatch_commands(cam); + save_camera_state(cam); + + return; +} + +static void get_version_information(struct cam_data *cam) +{ + /* GetCPIAVersion */ + do_command(cam, CPIA_COMMAND_GetCPIAVersion, 0, 0, 0, 0); + + /* GetPnPID */ + do_command(cam, 
CPIA_COMMAND_GetPnPID, 0, 0, 0, 0); +} + +/* initialize camera */ +static int reset_camera(struct cam_data *cam) +{ + /* Start the camera in low power mode */ + if (goto_low_power(cam)) { + if (cam->params.status.systemState != WARM_BOOT_STATE) + return -ENODEV; + + /* FIXME: this is just dirty trial and error */ + reset_camera_struct(cam); + goto_high_power(cam); + do_command(cam, CPIA_COMMAND_DiscardFrame, 0, 0, 0, 0); + if (goto_low_power(cam)) + return -ENODEV; + } + + /* procedure described in developer's guide p3-28 */ + + /* Check the firmware version FIXME: should we check PNPID? */ + cam->params.version.firmwareVersion = 0; + get_version_information(cam); + if (cam->params.version.firmwareVersion != 1) + return -ENODEV; + + /* The fatal error checking should be done after + * the camera powers up (developer's guide p 3-38) */ + + /* Set streamState before transition to high power to avoid bug + * in firmware 1-02 */ + do_command(cam, CPIA_COMMAND_ModifyCameraStatus, STREAMSTATE, 0, + STREAM_NOT_READY, 0); + + /* GotoHiPower */ + if (goto_high_power(cam)) + return -ENODEV; + + /* Check the camera status */ + if (do_command(cam, CPIA_COMMAND_GetCameraStatus, 0, 0, 0, 0)) + return -EIO; + + if (cam->params.status.fatalError) { + DBG("fatal_error: %#04x\n", + cam->params.status.fatalError); + DBG("vp_status: %#04x\n", + cam->params.status.vpStatus); + if (cam->params.status.fatalError & ~(COM_FLAG|CPIA_FLAG)) { + /* Fatal error in camera */ + return -EIO; + } else if (cam->params.status.fatalError & (COM_FLAG|CPIA_FLAG)) { + /* Firmware 1-02 may do this for parallel port cameras, + * just clear the flags (developer's guide p 3-38) */ + do_command(cam, CPIA_COMMAND_ModifyCameraStatus, + FATALERROR, ~(COM_FLAG|CPIA_FLAG), 0, 0); + } + } + + /* Check the camera status again */ + if (cam->params.status.fatalError) { + if (cam->params.status.fatalError) + return -EIO; + } + + /* VPVersion can't be retrieved before the camera is in HiPower, + * so get it here instead of in get_version_information. 
*/ + do_command(cam, CPIA_COMMAND_GetVPVersion, 0, 0, 0, 0); + + /* set camera to a known state */ + set_camera_state(cam); + + return 0; +} + +/* ------------------------- V4L interface --------------------- */ +static int cpia_open(struct video_device *dev, int flags) +{ + int i; + struct cam_data *cam = dev->priv; + + if (!cam) { + DBG("Internal error, cam_data not found!\n"); + return -EBUSY; + } + + if (cam->open_count > 0) { + DBG("Camera already open\n"); + return -EBUSY; + } + + if (!cam->raw_image) { + cam->raw_image = rvmalloc(CPIA_MAX_IMAGE_SIZE); + if (!cam->raw_image) + return -ENOMEM; + } + + if (!cam->decompressed_frame.data) { + cam->decompressed_frame.data = rvmalloc(CPIA_MAX_FRAME_SIZE); + if (!cam->decompressed_frame.data) { + rvfree(cam->raw_image, CPIA_MAX_IMAGE_SIZE); + cam->raw_image = NULL; + return -ENOMEM; + } + } + + /* open cpia */ + if (cam->ops->open(cam->lowlevel_data)) { + rvfree(cam->decompressed_frame.data, CPIA_MAX_FRAME_SIZE); + cam->decompressed_frame.data = NULL; + rvfree(cam->raw_image, CPIA_MAX_IMAGE_SIZE); + cam->raw_image = NULL; + return -ENODEV; + } + + /* reset the camera */ + if ((i = reset_camera(cam)) != 0) { + cam->ops->close(cam->lowlevel_data); + rvfree(cam->decompressed_frame.data, CPIA_MAX_FRAME_SIZE); + cam->decompressed_frame.data = NULL; + rvfree(cam->raw_image, CPIA_MAX_IMAGE_SIZE); + cam->raw_image = NULL; + return i; + } + + /* Set ownership of /proc/cpia/videoX to current user */ + if(cam->proc_entry) + cam->proc_entry->uid = current->uid; + + /* set mark for loading first frame uncompressed */ + cam->first_frame = 1; + + /* init it to something */ + cam->mmap_kludge = 0; + + ++cam->open_count; + return 0; +} + +static void cpia_close(struct video_device *dev) +{ + struct cam_data *cam; + + cam = dev->priv; + + if (cam->ops) { + /* Return ownership of /proc/cpia/videoX to root */ + if(cam->proc_entry) + cam->proc_entry->uid = 0; + + /* save camera state for later open (developer's guide ch 3.5.3) */ + save_camera_state(cam); + + /* GotoLoPower */ + goto_low_power(cam); + + /* Update the camera status */ + do_command(cam, CPIA_COMMAND_GetCameraStatus, 0, 0, 0, 0); + + /* cleanup internal state stuff */ + free_frames(cam->frame); + + /* close cpia */ + cam->ops->close(cam->lowlevel_data); + } + + if (--cam->open_count == 0) { + /* clean up capture-buffers */ + if (cam->raw_image) { + rvfree(cam->raw_image, CPIA_MAX_IMAGE_SIZE); + cam->raw_image = NULL; + } + + if (cam->decompressed_frame.data) { + rvfree(cam->decompressed_frame.data, CPIA_MAX_FRAME_SIZE); + cam->decompressed_frame.data = NULL; + } + + if (cam->frame_buf) + free_frame_buf(cam); + + if (!cam->ops) { + video_unregister_device(dev); + kfree(cam); + } + } + + + return; +} + +static long cpia_read(struct video_device *dev, char *buf, + unsigned long count, int noblock) +{ + struct cam_data *cam = dev->priv; + + /* make this _really_ smp and multithread-safe */ + if (down_interruptible(&cam->busy_lock)) + return -EINTR; + + if (!buf) { + DBG("buf NULL\n"); + up(&cam->busy_lock); + return -EINVAL; + } + + if (!count) { + DBG("count 0\n"); + up(&cam->busy_lock); + return 0; + } + + if (!cam->ops) { + DBG("ops NULL\n"); + up(&cam->busy_lock); + return -ENODEV; + } + + /* upload frame */ + cam->decompressed_frame.state = FRAME_READY; + cam->mmap_kludge=0; + fetch_frame(cam); + if (cam->decompressed_frame.state != FRAME_DONE) { + DBG("upload failed %d/%d\n", cam->decompressed_frame.count, + cam->decompressed_frame.state); + up(&cam->busy_lock); + return -EIO; + } + 
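For orientation, the read() path shown here hands back exactly one converted frame per call. A hypothetical userspace consumer, with the device path and buffer size chosen for CIF at RGB32 (the driver's default palette per reset_camera_struct() below):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            static unsigned char frame[352 * 288 * 4];      /* CIF at RGB32 */
            int fd = open("/dev/video0", O_RDONLY);
            ssize_t n;

            if (fd < 0)
                    return 1;
            n = read(fd, frame, sizeof(frame));     /* one frame per read() */
            printf("got %zd bytes\n", n);
            close(fd);
            return 0;
    }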
cam->decompressed_frame.state = FRAME_UNUSED; + + /* copy data to user space */ + if (cam->decompressed_frame.count > count) { + DBG("count wrong: %d, %lu\n", cam->decompressed_frame.count, + count); + up(&cam->busy_lock); + return -EFAULT; + } + if (copy_to_user(buf, cam->decompressed_frame.data, + cam->decompressed_frame.count)) { + DBG("copy_to_user failed\n"); + up(&cam->busy_lock); + return -EFAULT; + } + + up(&cam->busy_lock); + return cam->decompressed_frame.count; +} + +static int cpia_ioctl(struct video_device *dev, unsigned int ioctlnr, void *arg) +{ + struct cam_data *cam = dev->priv; + int retval = 0; + + if (!cam || !cam->ops) + return -ENODEV; + + /* make this _really_ smp-safe */ + if (down_interruptible(&cam->busy_lock)) + return -EINTR; + + //DBG("cpia_ioctl: %u\n", ioctlnr); + + switch (ioctlnr) { + /* query capabilities */ + case VIDIOCGCAP: + { + struct video_capability b; + + DBG("VIDIOCGCAP\n"); + memset(&b, 0, sizeof(b)); /* don't leak stack to userspace */ + strcpy(b.name, "CPiA Camera"); + b.type = VID_TYPE_CAPTURE; + b.channels = 1; + b.audios = 0; + b.maxwidth = 352; /* VIDEOSIZE_CIF */ + b.maxheight = 288; + b.minwidth = 48; /* VIDEOSIZE_48_48 */ + b.minheight = 48; + + if (copy_to_user(arg, &b, sizeof(b))) + retval = -EFAULT; + + break; + } + + /* get/set video source - we are a camera and nothing else */ + case VIDIOCGCHAN: + { + struct video_channel v; + + DBG("VIDIOCGCHAN\n"); + if (copy_from_user(&v, arg, sizeof(v))) { + retval = -EFAULT; + break; + } + if (v.channel != 0) { + retval = -EINVAL; + break; + } + + v.channel = 0; + strcpy(v.name, "Camera"); + v.tuners = 0; + v.flags = 0; + v.type = VIDEO_TYPE_CAMERA; + v.norm = 0; + + if (copy_to_user(arg, &v, sizeof(v))) + retval = -EFAULT; + break; + } + + case VIDIOCSCHAN: + { + int v; + + DBG("VIDIOCSCHAN\n"); + if (copy_from_user(&v, arg, sizeof(v))) + retval = -EFAULT; + + if (retval == 0 && v != 0) + retval = -EINVAL; + + break; + } + + /* image properties */ + case VIDIOCGPICT: + DBG("VIDIOCGPICT\n"); + if (copy_to_user(arg, &cam->vp, sizeof(struct video_picture))) + retval = -EFAULT; + break; + + case VIDIOCSPICT: + { + struct video_picture vp; + + DBG("VIDIOCSPICT\n"); + + /* copy_from_user */ + if (copy_from_user(&vp, arg, sizeof(vp))) { + retval = -EFAULT; + break; + } + + /* check validity */ + DBG("palette: %d\n", vp.palette); + DBG("depth: %d\n", vp.depth); + if (!valid_mode(vp.palette, vp.depth)) { + retval = -EINVAL; + break; + } + + down(&cam->param_lock); + /* brightness, colour, contrast need no check 0-65535 */ + memcpy( &cam->vp, &vp, sizeof(vp) ); + /* update cam->params.colourParams */ + cam->params.colourParams.brightness = vp.brightness*100/65535; + cam->params.colourParams.contrast = vp.contrast*100/65535; + cam->params.colourParams.saturation = vp.colour*100/65535; + /* contrast is in steps of 8, so round */ + cam->params.colourParams.contrast = + ((cam->params.colourParams.contrast + 3) / 8) * 8; + if (cam->params.version.firmwareVersion == 1 && + cam->params.version.firmwareRevision == 2 && + cam->params.colourParams.contrast > 80) { + /* 1-02 firmware limits contrast to 80 */ + cam->params.colourParams.contrast = 80; + } + + /* queue command to update camera */ + cam->cmd_queue |= COMMAND_SETCOLOURPARAMS; + up(&cam->param_lock); + DBG("VIDIOCSPICT: %d / %d // %d / %d / %d / %d\n", + vp.depth, vp.palette, vp.brightness, vp.hue, vp.colour, + vp.contrast); + break; + } + + /* get/set capture window */ + case VIDIOCGWIN: + DBG("VIDIOCGWIN\n"); + + if (copy_to_user(arg, &cam->vw, sizeof(struct video_window))) + retval = -EFAULT; + break; + + case 
VIDIOCSWIN: + { + /* copy_from_user, check validity, copy to internal structure */ + struct video_window vw; + DBG("VIDIOCSWIN\n"); + if (copy_from_user(&vw, arg, sizeof(vw))) { + retval = -EFAULT; + break; + } + + if (vw.clipcount != 0) { /* clipping not supported */ + retval = -EINVAL; + break; + } + if (vw.clips != NULL) { /* clipping not supported */ + retval = -EINVAL; + break; + } + + /* we set the video window to something smaller or equal to what + * is requested by the user??? + */ + down(&cam->param_lock); + if (vw.width != cam->vw.width || vw.height != cam->vw.height) { + int video_size = match_videosize(vw.width, vw.height); + + if (video_size < 0) { + retval = -EINVAL; + up(&cam->param_lock); + break; + } + cam->video_size = video_size; + set_vw_size(cam); + DBG("%d / %d\n", cam->vw.width, cam->vw.height); + cam->cmd_queue |= COMMAND_SETFORMAT; + } + + // FIXME needed??? memcpy(&cam->vw, &vw, sizeof(vw)); + up(&cam->param_lock); + + /* setformat ignored by camera during streaming, + * so stop/dispatch/start */ + if (cam->cmd_queue & COMMAND_SETFORMAT) { + DBG("\n"); + dispatch_commands(cam); + } + DBG("%d/%d:%d\n", cam->video_size, + cam->vw.width, cam->vw.height); + break; + } + + /* mmap interface */ + case VIDIOCGMBUF: + { + struct video_mbuf vm; + int i; + + DBG("VIDIOCGMBUF\n"); + memset(&vm, 0, sizeof(vm)); + vm.size = CPIA_MAX_FRAME_SIZE*FRAME_NUM; + vm.frames = FRAME_NUM; + for (i = 0; i < FRAME_NUM; i++) + vm.offsets[i] = CPIA_MAX_FRAME_SIZE * i; + + if (copy_to_user((void *)arg, (void *)&vm, sizeof(vm))) + retval = -EFAULT; + + break; + } + + case VIDIOCMCAPTURE: + { + struct video_mmap vm; + int video_size; + + if (copy_from_user((void *)&vm, (void *)arg, sizeof(vm))) { + retval = -EFAULT; + break; + } +#if 1 + DBG("VIDIOCMCAPTURE: %d / %d / %dx%d\n", vm.format, vm.frame, + vm.width, vm.height); +#endif + if (vm.frame<0||vm.frame>=FRAME_NUM) { + retval = -EINVAL; + break; + } + + /* set video format */ + cam->vp.palette = vm.format; + switch(vm.format) { + case VIDEO_PALETTE_GREY: + cam->vp.depth = 8; /* matches valid_mode() above */ + break; + case VIDEO_PALETTE_RGB555: + case VIDEO_PALETTE_RGB565: + case VIDEO_PALETTE_YUV422: + case VIDEO_PALETTE_YUYV: + case VIDEO_PALETTE_UYVY: + cam->vp.depth = 16; + break; + case VIDEO_PALETTE_RGB24: + cam->vp.depth = 24; + break; + case VIDEO_PALETTE_RGB32: + cam->vp.depth = 32; + break; + default: + retval = -EINVAL; + break; + } + if (retval) + break; + + /* set video size */ + video_size = match_videosize(vm.width, vm.height); + if (video_size < 0) { + retval = -EINVAL; + break; + } + if (video_size != cam->video_size) { + cam->video_size = video_size; + set_vw_size(cam); + cam->cmd_queue |= COMMAND_SETFORMAT; + dispatch_commands(cam); + } +#if 0 + DBG("VIDIOCMCAPTURE: %d / %d/%d\n", cam->video_size, + cam->vw.width, cam->vw.height); +#endif + /* according to v4l-spec we must start streaming here */ + cam->mmap_kludge = 1; + retval = capture_frame(cam, &vm); + + break; + } + + case VIDIOCSYNC: + { + int frame; + + if (copy_from_user((void *)&frame, arg, sizeof(int))) { + retval = -EFAULT; + break; + } + //DBG("VIDIOCSYNC: %d\n", frame); + + if (frame<0 || frame >= FRAME_NUM) { + retval = -EINVAL; + break; + } + + switch (cam->frame[frame].state) { + case FRAME_UNUSED: + case FRAME_READY: + case FRAME_GRABBING: + DBG("sync to unused frame %d\n", frame); + retval = -EINVAL; + break; + + case FRAME_DONE: + cam->frame[frame].state = FRAME_UNUSED; + //DBG("VIDIOCSYNC: %d synced\n", frame); + break; + } + if (retval == -EINTR) { + /* FIXME - xawtv does not handle this nicely */ + retval = 
0; + } + break; + } + + /* pointless to implement overlay with this camera */ + case VIDIOCCAPTURE: + retval = -EINVAL; + break; + case VIDIOCGFBUF: + retval = -EINVAL; + break; + case VIDIOCSFBUF: + retval = -EINVAL; + break; + case VIDIOCKEY: + retval = -EINVAL; + break; + + /* tuner interface - we have none */ + case VIDIOCGTUNER: + retval = -EINVAL; + break; + case VIDIOCSTUNER: + retval = -EINVAL; + break; + case VIDIOCGFREQ: + retval = -EINVAL; + break; + case VIDIOCSFREQ: + retval = -EINVAL; + break; + + /* audio interface - we have none */ + case VIDIOCGAUDIO: + retval = -EINVAL; + break; + case VIDIOCSAUDIO: + retval = -EINVAL; + break; + default: + retval = -ENOIOCTLCMD; + break; + } + + up(&cam->param_lock); + up(&cam->busy_lock); + return retval; +} + +/* FIXME */ +static int cpia_mmap(struct video_device *dev, const char *adr, + unsigned long size) +{ + unsigned long start = (unsigned long)adr; + unsigned long page, pos; + struct cam_data *cam = dev->priv; + int retval; + + if (!cam || !cam->ops) + return -ENODEV; + + DBG("cpia_mmap: %ld\n", size); + + if (size > FRAME_NUM*CPIA_MAX_FRAME_SIZE) + return -EINVAL; + + if (!cam || !cam->ops) + return -ENODEV; + + /* make this _really_ smp-safe */ + if (down_interruptible(&cam->busy_lock)) + return -EINTR; + + if (!cam->frame_buf) { /* we do lazy allocation */ + if ((retval = allocate_frame_buf(cam))) { + up(&cam->busy_lock); + return retval; + } + } + + pos = (unsigned long)(cam->frame_buf); + while (size > 0) { + page = kvirt_to_pa(pos); + if (remap_page_range(start, page, PAGE_SIZE, PAGE_SHARED)) { + up(&cam->busy_lock); + return -EAGAIN; + } + start += PAGE_SIZE; + pos += PAGE_SIZE; + if (size > PAGE_SIZE) + size -= PAGE_SIZE; + else + size = 0; + } + + DBG("cpia_mmap: %ld\n", size); + up(&cam->busy_lock); + + return 0; +} + +int cpia_video_init(struct video_device *vdev) +{ +#ifdef CONFIG_PROC_FS + create_proc_cpia_cam(vdev->priv); +#endif + return 0; +} + +static struct video_device cpia_template = { + owner: THIS_MODULE, + name: "CPiA Camera", + type: VID_TYPE_CAPTURE, + hardware: VID_HARDWARE_CPIA, /* FIXME */ + open: cpia_open, + close: cpia_close, + read: cpia_read, + ioctl: cpia_ioctl, + mmap: cpia_mmap, + initialize: cpia_video_init, + minor: -1, +}; + +/* initialise cam_data structure */ +static void reset_camera_struct(struct cam_data *cam) +{ + /* The following parameter values are the defaults from + * "Software Developer's Guide for CPiA Cameras". Any changes + * to the defaults are noted in comments. 
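On the size cap checked in cpia_mmap() above: with the cpia.h hunk later in this patch, CPIA_MAX_FRAME_SIZE is CIF at RGB32 rounded up to a multiple of MMUPAGE_SIZE. A sketch of that arithmetic, assuming MMUPAGE_SIZE is 4096; 352*288*4 happens to be exactly 99 such pages already:

    #include <stdio.h>

    int main(void)
    {
            unsigned long mmupage = 4096;                   /* assumed */
            unsigned long unaligned = 352 * 288 * 4;        /* 405504 */
            unsigned long max = (unaligned + mmupage - 1) & ~(mmupage - 1);

            printf("%lu bytes per frame\n", max);   /* 405504: 99 pages */
            return 0;
    }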
*/ + cam->params.colourParams.brightness = 50; + cam->params.colourParams.contrast = 48; + cam->params.colourParams.saturation = 50; + cam->params.exposure.gainMode = 2; + cam->params.exposure.expMode = 2; /* AEC */ + cam->params.exposure.compMode = 1; + cam->params.exposure.centreWeight = 1; + cam->params.exposure.gain = 0; + cam->params.exposure.fineExp = 0; + cam->params.exposure.coarseExpLo = 185; + cam->params.exposure.coarseExpHi = 0; + cam->params.exposure.redComp = 220; + cam->params.exposure.green1Comp = 214; + cam->params.exposure.green2Comp = 214; + cam->params.exposure.blueComp = 230; + cam->params.colourBalance.balanceModeIsAuto = 1; + cam->params.colourBalance.redGain = 32; + cam->params.colourBalance.greenGain = 6; + cam->params.colourBalance.blueGain = 92; + cam->params.apcor.gain1 = 0x1c; + cam->params.apcor.gain2 = 0x1a; + cam->params.apcor.gain4 = 0x2d; + cam->params.apcor.gain8 = 0x2a; + cam->params.flickerControl.flickerMode = 0; + cam->params.flickerControl.coarseJump = + flicker_jumps[cam->mainsFreq] + [cam->params.sensorFps.baserate] + [cam->params.sensorFps.divisor]; + cam->params.vlOffset.gain1 = 24; + cam->params.vlOffset.gain2 = 28; + cam->params.vlOffset.gain4 = 30; + cam->params.vlOffset.gain8 = 30; + cam->params.compressionParams.hysteresis = 3; + cam->params.compressionParams.threshMax = 11; + cam->params.compressionParams.smallStep = 1; + cam->params.compressionParams.largeStep = 3; + cam->params.compressionParams.decimationHysteresis = 2; + cam->params.compressionParams.frDiffStepThresh = 5; + cam->params.compressionParams.qDiffStepThresh = 3; + cam->params.compressionParams.decimationThreshMod = 2; + /* End of default values from Software Developer's Guide */ + + cam->transfer_rate = 0; + + /* Set Sensor FPS to 15fps. This seems better than 30fps + * for indoor lighting. */ + cam->params.sensorFps.divisor = 1; + cam->params.sensorFps.baserate = 1; + + cam->params.yuvThreshold.yThreshold = 15; /* FIXME? */ + cam->params.yuvThreshold.uvThreshold = 15; /* FIXME? */ + + cam->params.format.subSample = SUBSAMPLE_422; + cam->params.format.yuvOrder = YUVORDER_YUYV; + + cam->params.compression.mode = CPIA_COMPRESSION_AUTO; + cam->params.compressionTarget.frTargeting = + CPIA_COMPRESSION_TARGET_QUALITY; + cam->params.compressionTarget.targetFR = 7; /* FIXME? */ + cam->params.compressionTarget.targetQ = 10; /* FIXME? */ + + cam->video_size = VIDEOSIZE_CIF; + + cam->vp.colour = 32768; /* 50% */ + cam->vp.hue = 32768; /* 50% */ + cam->vp.brightness = 32768; /* 50% */ + cam->vp.contrast = 32768; /* 50% */ + cam->vp.whiteness = 0; /* not used -> grayscale only */ + cam->vp.depth = 0; /* FIXME: to be set by user? */ + cam->vp.palette = VIDEO_PALETTE_RGB24; /* FIXME: to be set by user? */ + + cam->vw.x = 0; + cam->vw.y = 0; + set_vw_size(cam); + cam->vw.chromakey = 0; + /* PP NOTE: my extension to use vw.flags for this, bear it! 
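The 32768 values above are 50% deliberately: the VIDIOCSPICT handler earlier converts V4L's 0..65535 scale to the camera's 0..100 via value*100/65535, then rounds contrast to a step of 8, which is exactly how the 32768 default becomes the colourParams.contrast default of 48. A worked check:

    #include <stdio.h>

    int main(void)
    {
            int contrast = 32768 * 100 / 65535;     /* 50 */

            /* rounded to a step of 8, as in the VIDIOCSPICT handler */
            printf("%d -> %d\n", contrast, (contrast + 3) / 8 * 8); /* 48 */
            return 0;
    }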
*/ + cam->vw.flags = 0; + cam->vw.clipcount = 0; + cam->vw.clips = NULL; + + cam->cmd_queue = COMMAND_NONE; + cam->first_frame = 0; + + return; +} + +/* initialize cam_data structure */ +static void init_camera_struct(struct cam_data *cam, + struct cpia_camera_ops *ops ) +{ + int i; + + /* Default everything to 0 */ + memset(cam, 0, sizeof(struct cam_data)); + + cam->ops = ops; + init_MUTEX(&cam->param_lock); + init_MUTEX(&cam->busy_lock); + + reset_camera_struct(cam); + + cam->proc_entry = NULL; + + memcpy(&cam->vdev, &cpia_template, sizeof(cpia_template)); + cam->vdev.priv = cam; + + cam->curframe = 0; + for (i = 0; i < FRAME_NUM; i++) { + cam->frame[i].width = 0; + cam->frame[i].height = 0; + cam->frame[i].state = FRAME_UNUSED; + cam->frame[i].data = NULL; + } + cam->decompressed_frame.width = 0; + cam->decompressed_frame.height = 0; + cam->decompressed_frame.state = FRAME_UNUSED; + cam->decompressed_frame.data = NULL; +} + +struct cam_data *cpia_register_camera(struct cpia_camera_ops *ops, void *lowlevel) +{ + struct cam_data *camera; + + /* Need a lock when adding/removing cameras. This doesn't happen + * often and doesn't take very long, so grabbing the kernel lock + * should be OK. */ + lock_kernel(); + + if ((camera = kmalloc(sizeof(struct cam_data), GFP_KERNEL)) == NULL) { + unlock_kernel(); + return NULL; + } + + init_camera_struct( camera, ops ); + camera->lowlevel_data = lowlevel; + + /* register v4l device */ + if (video_register_device(&camera->vdev, VFL_TYPE_GRABBER, video_nr) == -1) { + kfree(camera); + unlock_kernel(); + printk(KERN_DEBUG "video_register_device failed\n"); + return NULL; + } + unlock_kernel(); + + /* get version information from camera: open/reset/close */ + + /* open cpia */ + if (camera->ops->open(camera->lowlevel_data)) + return camera; + + /* reset the camera */ + if (reset_camera(camera) != 0) { + camera->ops->close(camera->lowlevel_data); + return camera; + } + + /* close cpia */ + camera->ops->close(camera->lowlevel_data); + +/* Eh? Feeling happy? 
- jerdfelt */ +/* + camera->ops->open(camera->lowlevel_data); + camera->ops->close(camera->lowlevel_data); +*/ + + printk(KERN_INFO " CPiA Version: %d.%02d (%d.%d)\n", + camera->params.version.firmwareVersion, + camera->params.version.firmwareRevision, + camera->params.version.vcVersion, + camera->params.version.vcRevision); + printk(KERN_INFO " CPiA PnP-ID: %04x:%04x:%04x\n", + camera->params.pnpID.vendor, + camera->params.pnpID.product, + camera->params.pnpID.deviceRevision); + printk(KERN_INFO " VP-Version: %d.%d %04x\n", + camera->params.vpVersion.vpVersion, + camera->params.vpVersion.vpRevision, + camera->params.vpVersion.cameraHeadID); + + return camera; +} + +void cpia_unregister_camera(struct cam_data *cam) +{ + if (!cam->open_count) { + DBG("unregistering video\n"); + video_unregister_device(&cam->vdev); + } else { + LOG("/dev/video%d removed while open, " + "deferring video_unregister_device\n", cam->vdev.minor); + DBG("camera open -- setting ops to NULL\n"); + cam->ops = NULL; + } + +#ifdef CONFIG_PROC_FS + DBG("destroying /proc/cpia/video%d\n", cam->vdev.minor); + destroy_proc_cpia_cam(cam); +#endif + if (!cam->open_count) { + DBG("freeing camera\n"); + kfree(cam); + } +} + +/**************************************************************************** + * + * Module routines + * + ***************************************************************************/ + +#ifdef MODULE +int init_module(void) +{ + printk(KERN_INFO "%s v%d.%d.%d\n", ABOUT, + CPIA_MAJ_VER, CPIA_MIN_VER, CPIA_PATCH_VER); +#ifdef CONFIG_PROC_FS + proc_cpia_create(); +#endif +#ifdef CONFIG_KMOD +#ifdef CONFIG_VIDEO_CPIA_PP_MODULE + request_module("cpia_pp"); +#endif +#ifdef CONFIG_VIDEO_CPIA_USB_MODULE + request_module("cpia_usb"); +#endif +#endif +return 0; +} + +void cleanup_module(void) +{ +#ifdef CONFIG_PROC_FS + proc_cpia_destroy(); +#endif +} + +#else + +int cpia_init(struct video_init *unused) +{ + printk(KERN_INFO "%s v%d.%d.%d\n", ABOUT, + CPIA_MAJ_VER, CPIA_MIN_VER, CPIA_PATCH_VER); +#ifdef CONFIG_PROC_FS + proc_cpia_create(); +#endif + +#ifdef CONFIG_VIDEO_CPIA_PP + cpia_pp_init(); +#endif +#ifdef CONFIG_KMOD +#ifdef CONFIG_VIDEO_CPIA_PP_MODULE + request_module("cpia_pp"); +#endif + +#ifdef CONFIG_VIDEO_CPIA_USB_MODULE + request_module("cpia_usb"); +#endif +#endif /* CONFIG_KMOD */ +#ifdef CONFIG_VIDEO_CPIA_USB + cpia_usb_init(); +#endif + return 0; +} + +/* Exported symbols for modules. 
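The two exports that follow are the whole interface the lowlevel transport modules (cpia_pp, cpia_usb) build on. A minimal sketch of such a client, with hypothetical names (my_ops, my_probe, my_remove); the cpia_camera_ops members this file actually calls are open, close, transferCmd, streamRead and wait_for_stream_ready:

    /* hypothetical lowlevel transport module */
    static struct cpia_camera_ops my_ops = {
            /* open, close, transferCmd, streamRead, ... filled in here */
    };

    static struct cam_data *my_cam;

    static int my_probe(void *lowlevel)
    {
            my_cam = cpia_register_camera(&my_ops, lowlevel);
            return my_cam ? 0 : -ENODEV;
    }

    static void my_remove(void)
    {
            if (my_cam)
                    cpia_unregister_camera(my_cam);
    }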
diff -urpN linux-2.4.9-linus/drivers/media/video/cpia.h linux-2.4.9-larpage/drivers/media/video/cpia.h
--- linux-2.4.9-linus/drivers/media/video/cpia.h	2001-03-02 11:12:10.000000000 -0800
+++ linux-2.4.9-larpage/drivers/media/video/cpia.h	2002-11-20 02:02:47.000000000 -0800
@@ -35,7 +35,7 @@
 #define CPIA_PP_PATCH_VER	4
 
 #define CPIA_MAX_FRAME_SIZE_UNALIGNED	(352 * 288 * 4)   /* CIF at RGB32 */
-#define CPIA_MAX_FRAME_SIZE	((CPIA_MAX_FRAME_SIZE_UNALIGNED + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1)) /* align above to PAGE_SIZE */
+#define CPIA_MAX_FRAME_SIZE	((CPIA_MAX_FRAME_SIZE_UNALIGNED + MMUPAGE_SIZE - 1) & ~(MMUPAGE_SIZE - 1)) /* align above to MMUPAGE_SIZE */
 
 #ifdef __KERNEL__
 
diff -urpN linux-2.4.9-linus/drivers/media/video/meye.c linux-2.4.9-larpage/drivers/media/video/meye.c
--- linux-2.4.9-linus/drivers/media/video/meye.c	2001-07-28 12:35:55.000000000 -0700
+++ linux-2.4.9-larpage/drivers/media/video/meye.c	2002-11-20 02:02:48.000000000 -0800
@@ -110,103 +110,67 @@ static inline int meye_emptyq(struct mey
 	return result;
 }
 
-/****************************************************************************/
-/* Memory allocation routines (stolen from bttv-driver.c)                   */
-/****************************************************************************/
-
-#define MDEBUG(x)	do {} while (0)
-/* #define MDEBUG(x)	x */
+/**********************************************************/
+/* Memory management functions, copied from bttv-driver.c */
+/**********************************************************/
 
-/* Given PGD from the address space's page table, return the kernel
- * virtual mapping of the physical memory mapped at ADR.
- */
-static inline unsigned long uvirt_to_kva(pgd_t *pgd, unsigned long adr) {
-	unsigned long ret = 0UL;
-	pmd_t *pmd;
-	pte_t *ptep, pte;
-
-	if (!pgd_none(*pgd)) {
-		pmd = pmd_offset(pgd, adr);
-		if (!pmd_none(*pmd)) {
-			ptep = pte_offset(pmd, adr);
-			pte = *ptep;
-			if(pte_present(pte)) {
-				ret = (unsigned long)page_address(pte_page(pte));
-				ret |= (adr & (PAGE_SIZE - 1));
-
-			}
-		}
-	}
-	MDEBUG(printk("uv2kva(%lx-->%lx)\n", adr, ret));
-	return ret;
-}
-
-static inline unsigned long uvirt_to_bus(unsigned long adr) {
-	unsigned long kva, ret;
-
-	kva = uvirt_to_kva(pgd_offset(current->mm, adr), adr);
-	ret = virt_to_bus((void *)kva);
-	MDEBUG(printk("uv2b(%lx-->%lx)\n", adr, ret));
-	return ret;
-}
-
-static inline unsigned long kvirt_to_bus(unsigned long adr) {
-	unsigned long va, kva, ret;
-
-	va = VMALLOC_VMADDR(adr);
-	kva = uvirt_to_kva(pgd_offset_k(va), va);
-	ret = virt_to_bus((void *)kva);
-	MDEBUG(printk("kv2b(%lx-->%lx)\n", adr, ret));
-	return ret;
-}
-
-/* Here we want the physical address of the memory.
- * This is used when initializing the contents of the
- * area and marking the pages as reserved.
- */
-static inline unsigned long kvirt_to_pa(unsigned long adr) {
-	unsigned long va, kva, ret;
-
-	va = VMALLOC_VMADDR(adr);
-	kva = uvirt_to_kva(pgd_offset_k(va), va);
-	ret = __pa(kva);
-	MDEBUG(printk("kv2pa(%lx-->%lx)\n", adr, ret));
-	return ret;
-}
-
-static void *rvmalloc(signed long size) {
+static void *rvmalloc(unsigned long size) {
 	void *mem;
-	unsigned long adr, page;
 
 	mem = vmalloc_32(size);
 	if (mem) {
-		memset(mem, 0, size); /* Clear the ram out, no junk to the user */
-		adr = (unsigned long)mem;
-		while (size > 0) {
-			page = kvirt_to_pa(adr);
-			mem_map_reserve(virt_to_page(__va(page)));
-			adr += PAGE_SIZE;
-			size -= PAGE_SIZE;
-		}
+		/* no junk to the user */
+		memset(mem, 0, PAGE_ALIGN(size));
+		/* no need to reserve until rvmap_page_range */
 	}
 	return mem;
 }
 
-static void rvfree(void * mem, signed long size) {
-	unsigned long adr, page;
-
+static void rvfree(void *mem, unsigned long size)
+{
+	unsigned long vadr;
 	if (mem) {
-		adr = (unsigned long) mem;
-		while (size > 0) {
-			page = kvirt_to_pa(adr);
-			mem_map_unreserve(virt_to_page(__va(page)));
-			adr += PAGE_SIZE;
+		vadr = (unsigned long) mem;
+		while ((long) size > 0) {
+			ClearPageReserved(vvirt_to_page(vadr));
+			vadr += PAGE_SIZE;
 			size -= PAGE_SIZE;
 		}
 		vfree(mem);
 	}
 }
 
+static inline int rvmap_page_range(const char *uadr, void *mem,
+				   unsigned long size, pgprot_t prot)
+{
+	struct page *page;
+	unsigned long padr;
+	unsigned long unit = PAGE_SIZE;
+
+	while ((long)size > 0) {
+		if (unit > size)
+			unit = size;
+		page = vvirt_to_page((unsigned long)mem);
+		SetPageReserved(page);
+		padr = __pa(page_address(page));
+		if (remap_page_range((unsigned long)uadr, padr, unit, prot))
+			return -EAGAIN;
+		uadr += PAGE_SIZE;
+		mem += PAGE_SIZE;
+		size -= PAGE_SIZE;
+	}
+	return 0;
+}
+
+static inline unsigned long kvirt_to_bus(unsigned long vadr)
+{
+	unsigned long kadr;
+
+	kadr = (unsigned long)page_address(vvirt_to_page(vadr)) +
+		(vadr & ~PAGE_MASK);
+	return virt_to_bus((void *) kadr);
+}
+
 /* return a page table pointing to N pages of locked memory */
 static void *ptable_alloc(int npages, u32 *pt_addr) {
 	int i;
@@ -214,23 +178,23 @@ static void *ptable_alloc(int npages, u3
 	u32 *ptable;
 	unsigned long adr;
 
-	vmem = rvmalloc((npages + 1) * PAGE_SIZE);
+	vmem = rvmalloc((npages + 1) * MMUPAGE_SIZE);
 	if (!vmem)
 		return NULL;
 
 	adr = (unsigned long)vmem;
-	ptable = (u32 *)(vmem + npages * PAGE_SIZE);
+	ptable = (u32 *)(vmem + npages * MMUPAGE_SIZE);
 	for (i = 0; i < npages; i++) {
-		ptable[i] = (u32) kvirt_to_bus(adr);
-		adr += PAGE_SIZE;
+		ptable[i] = kvirt_to_bus(adr);
+		adr += MMUPAGE_SIZE;
 	}
 
-	*pt_addr = (u32) kvirt_to_bus(adr);
+	*pt_addr = kvirt_to_bus(adr);
 	return vmem;
 }
 
 static void ptable_free(void *vmem, int npages) {
-	rvfree(vmem, (npages + 1) * PAGE_SIZE);
+	rvfree(vmem, (npages + 1) * MMUPAGE_SIZE);
 }
 
 /****************************************************************************/
@@ -667,14 +631,14 @@ static void mchip_cont_read_frame(u32 v
 	pt_id = (v >> 17) & 0x3FF;
 	avail = MCHIP_NB_PAGES - pt_id;
 
-	if (size > avail*PAGE_SIZE) {
-		memcpy(buf, meye.mchip_fbuffer + pt_id * PAGE_SIZE,
-		       avail * PAGE_SIZE);
-		memcpy(buf +avail * PAGE_SIZE, meye.mchip_fbuffer,
-		       size - avail * PAGE_SIZE);
+	if (size > avail*MMUPAGE_SIZE) {
+		memcpy(buf, meye.mchip_fbuffer + pt_id * MMUPAGE_SIZE,
+		       avail * MMUPAGE_SIZE);
+		memcpy(buf +avail * MMUPAGE_SIZE, meye.mchip_fbuffer,
+		       size - avail * MMUPAGE_SIZE);
 	}
 	else
-		memcpy(buf, meye.mchip_fbuffer + pt_id * PAGE_SIZE, size);
+		memcpy(buf, meye.mchip_fbuffer + pt_id * MMUPAGE_SIZE, size);
 }
 
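Taken together, the hunks above split the old kvirt_to_pa()-based scheme
in two: rvmalloc() now only allocates and zeroes the vmalloc area, while
rvmap_page_range() reserves each backing page and maps it into user space
in a single pass, using vvirt_to_page() (presumably defined elsewhere in
the larpage patch; it is not in these hunks) in place of the old
open-coded page-table walk. A driver's mmap handler then collapses to a
sketch like the following, modelled on the meye_mmap() hunk further down;
the mydev names are illustrative, not from the patch:

	/* Hypothetical v4l mmap handler built on rvmalloc()/rvmap_page_range();
	 * 'mydev' stands in for a driver singleton such as 'meye'. */
	static struct mydev_state {
		void *fbuffer;
	} mydev;

	static int mydev_mmap(struct video_device *dev, const char *adr,
			      unsigned long size)
	{
		if (!mydev.fbuffer) {
			/* allocated zeroed, pages not yet reserved */
			mydev.fbuffer = rvmalloc(size);
			if (!mydev.fbuffer)
				return -ENOMEM;
		}
		/* reserves each backing page and maps it at 'adr' */
		return rvmap_page_range(adr, mydev.fbuffer, size, PAGE_SHARED);
	}

Note that rvfree() still clears PageReserved on every page before
vfree(), which covers whatever rvmap_page_range() reserved at mmap time.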
 /* read a compressed frame from the framebuffer */
@@ -688,26 +652,26 @@ static int mchip_comp_read_frame(u32 v,
 	trailer = (v >> 1) & 0x3FF;
 
 	if (pt_end < pt_start) {
-		fsize = (MCHIP_NB_PAGES_MJPEG - pt_start) * PAGE_SIZE;
-		fsize2 = pt_end * PAGE_SIZE + trailer * 4;
+		fsize = (MCHIP_NB_PAGES_MJPEG - pt_start) * MMUPAGE_SIZE;
+		fsize2 = pt_end * MMUPAGE_SIZE + trailer * 4;
 		if (fsize + fsize2 > size) {
 			printk(KERN_WARNING "meye: oversized compressed frame %d %d\n",
 			       fsize, fsize2);
 			return -1;
 		} else {
-			memcpy(buf, meye.mchip_fbuffer + pt_start * PAGE_SIZE,
+			memcpy(buf, meye.mchip_fbuffer + pt_start * MMUPAGE_SIZE,
 			       fsize);
 			memcpy(buf + fsize, meye.mchip_fbuffer, fsize2);
 			fsize += fsize2;
 		}
 	} else {
-		fsize = (pt_end - pt_start) * PAGE_SIZE + trailer * 4;
+		fsize = (pt_end - pt_start) * MMUPAGE_SIZE + trailer * 4;
 		if (fsize > size) {
 			printk(KERN_WARNING "meye: oversized compressed frame %d\n",
 			       fsize);
 			return -1;
 		} else
-			memcpy(buf, meye.mchip_fbuffer + pt_start * PAGE_SIZE,
+			memcpy(buf, meye.mchip_fbuffer + pt_start * MMUPAGE_SIZE,
 			       fsize);
 	}
 
@@ -1241,8 +1205,7 @@ static int meye_ioctl(struct video_devic
 static int meye_mmap(struct video_device *dev, const char *adr,
 		     unsigned long size) {
-	unsigned long start=(unsigned long) adr;
-	unsigned long page,pos;
+	int retval;
 
 	down(&meye.lock);
 	if (size > gbuffers * gbufsize) {
@@ -1258,20 +1221,9 @@ static int meye_mmap(struct video_device
 			return -ENOMEM;
 		}
 	}
-	pos = (unsigned long)meye.grab_fbuffer;
-
-	while (size > 0) {
-		page = kvirt_to_pa(pos);
-		if (remap_page_range(start, page, PAGE_SIZE, PAGE_SHARED)) {
-			up(&meye.lock);
-			return -EAGAIN;
-		}
-		start += PAGE_SIZE;
-		pos += PAGE_SIZE;
-		size -= PAGE_SIZE;
-	}
+	retval = rvmap_page_range(adr, meye.grab_fbuffer, size, PAGE_SHARED);
 	up(&meye.lock);
-	return 0;
+	return retval;
 }
 
 static struct video_device meye_template = {
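A note on why PAGE_SIZE becomes MMUPAGE_SIZE throughout the mchip code:
the page table built by ptable_alloc() hands the capture chip bus
addresses at MMUPAGE_SIZE strides, so the device keeps filling the frame
buffer in hardware-page units (4K on i386) no matter how large the
kernel's PAGE_SIZE is configured. The buffer behaves as a ring of
MCHIP_NB_PAGES such units, which is why mchip_cont_read_frame() sometimes
needs two memcpy() calls. A userspace model of that wrap-around read,
with UNIT and NB_PAGES standing in for MMUPAGE_SIZE and MCHIP_NB_PAGES
(the values here are illustrative):

	#include <string.h>

	#define UNIT		4096	/* stands in for MMUPAGE_SIZE */
	#define NB_PAGES	256	/* stands in for MCHIP_NB_PAGES */

	/* Copy a 'size'-byte frame that starts at ring unit 'pt_id'. */
	static void ring_read(const char *fb, char *buf, int pt_id, int size)
	{
		int avail = (NB_PAGES - pt_id) * UNIT;	/* bytes before the wrap */

		if (size > avail) {
			memcpy(buf, fb + pt_id * UNIT, avail);
			memcpy(buf + avail, fb, size - avail);	/* wrapped tail */
		} else {
			memcpy(buf, fb + pt_id * UNIT, size);
		}
	}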
diff -urpN linux-2.4.9-linus/drivers/media/video/planb.c linux-2.4.9-larpage/drivers/media/video/planb.c
--- linux-2.4.9-linus/drivers/media/video/planb.c	2001-06-27 17:10:55.000000000 -0700
+++ linux-2.4.9-larpage/drivers/media/video/planb.c	2002-11-20 02:02:48.000000000 -0800
@@ -1991,6 +1991,7 @@ static int planb_mmap(struct video_devic
 	int i;
 	struct planb *pb = (struct planb *)dev;
 	unsigned long start = (unsigned long)adr;
+	unsigned long map_size;
 
 	if (size > MAX_GBUFFERS * PLANB_MAX_FBUF)
 		return -EINVAL;
@@ -1999,14 +2000,15 @@ static int planb_mmap(struct video_devic
 		if((err=grabbuf_alloc(pb)))
 			return err;
 	}
-	for (i = 0; i < pb->rawbuf_size; i++) {
+	map_size = PAGE_SIZE;
+	for (i = 0; size && i < pb->rawbuf_size; i++) {
+		if (size < PAGE_SIZE)
+			map_size = size;
+		size -= map_size;
 		if (remap_page_range(start, virt_to_phys((void *)pb->rawbuf[i]),
-				     PAGE_SIZE, PAGE_SHARED))
+				     map_size, PAGE_SHARED))
 			return -EAGAIN;
 		start += PAGE_SIZE;
-		if (size <= PAGE_SIZE)
-			break;
-		size -= PAGE_SIZE;
 	}
 	return 0;
 }
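The planb_mmap() rework above fixes two quirks of the old loop: it always
mapped a full PAGE_SIZE per buffer page, and it tested for exhaustion
only after mapping, so a request of PAGE_SIZE + 100 bytes got two whole
pages. The new loop shortens the final chunk to the bytes remaining and
exits as soon as 'size' is consumed. A userspace model comparing how many
bytes each version would map; the PAGE_SIZE value is illustrative:

	#include <stdio.h>

	#define PAGE_SIZE 4096UL

	/* old loop: full pages, break only after the page that crosses the end */
	static unsigned long old_loop(unsigned long size, int nbufs)
	{
		unsigned long mapped = 0;
		int i;

		for (i = 0; i < nbufs; i++) {
			mapped += PAGE_SIZE;
			if (size <= PAGE_SIZE)
				break;
			size -= PAGE_SIZE;
		}
		return mapped;
	}

	/* new loop: short final chunk, stop exactly when 'size' is consumed */
	static unsigned long new_loop(unsigned long size, int nbufs)
	{
		unsigned long map_size = PAGE_SIZE, mapped = 0;
		int i;

		for (i = 0; size && i < nbufs; i++) {
			if (size < PAGE_SIZE)
				map_size = size;
			size -= map_size;
			mapped += map_size;
		}
		return mapped;
	}

	int main(void)
	{
		printf("old: %lu, new: %lu\n",
		       old_loop(PAGE_SIZE + 100, 32),
		       new_loop(PAGE_SIZE + 100, 32));
		return 0;	/* prints: old: 8192, new: 4196 */
	}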
reactivating\n"); + planb_dbdma_stop(&pb->planb_base->ch1); + out_le32 (&pb->planb_base->ch1.cmdptr, + virt_to_bus(pb->ch1_cmd)); + planb_dbdma_restart(&pb->planb_base->ch1); + } + } else { + + DEBUG("PlanB: currently idle, so can do whatever\n"); + + planb_dbdma_stop(&pb->planb_base->ch2); + planb_dbdma_stop(&pb->planb_base->ch1); + st_le32 (&pb->planb_base->ch2.cmdptr, + virt_to_bus(pb->ch2_cmd)); + st_le32 (&pb->planb_base->ch1.cmdptr, + virt_to_bus(pb->ch1_cmd)); + out_le16 (&pb->ch1_cmd->command, DBDMA_NOP); + planb_dbdma_restart(&pb->planb_base->ch2); + planb_dbdma_restart(&pb->planb_base->ch1); + pb->last_fr = -1; + } + return; +} + +static void overlay_stop(struct planb *pb) +{ + DEBUG("PlanB: overlay_stop()\n"); + + if(pb->last_fr == -1) { + + DEBUG("PlanB: no grabbing, it seems...\n"); + + planb_dbdma_stop(&pb->planb_base->ch2); + planb_dbdma_stop(&pb->planb_base->ch1); + pb->last_fr = -999; + } else if(pb->last_fr == -2) { + unsigned int cmd_dep; + tab_cmd_dbdma(pb->cap_cmd[pb->prev_last_fr], DBDMA_STOP, 0); + eieio(); + cmd_dep = (unsigned int)in_le32(&pb->overlay_last1->cmd_dep); + if(overlay_is_active(pb)) { + + DEBUG("PlanB: overlay is currently active\n"); + + planb_dbdma_stop(&pb->planb_base->ch2); + planb_dbdma_stop(&pb->planb_base->ch1); + if(cmd_dep != pb->ch1_cmd_phys) { + out_le32(&pb->planb_base->ch1.cmdptr, + virt_to_bus(pb->overlay_last1)); + planb_dbdma_restart(&pb->planb_base->ch1); + } + } + pb->last_fr = pb->prev_last_fr; + pb->prev_last_fr = -999; + } + return; +} + +static void suspend_overlay(struct planb *pb) +{ + int fr = -1; + struct dbdma_cmd last; + + DEBUG("PlanB: suspend_overlay: %d\n", pb->suspend); + + if(pb->suspend++) + return; + if(ACTIVE & in_le32(&pb->planb_base->ch1.status)) { + if(pb->last_fr == -2) { + fr = pb->prev_last_fr; + memcpy(&last, (void*)pb->last_cmd[fr], sizeof(last)); + tab_cmd_dbdma(pb->last_cmd[fr], DBDMA_STOP, 0); + } + if(overlay_is_active(pb)) { + planb_dbdma_stop(&pb->planb_base->ch2); + planb_dbdma_stop(&pb->planb_base->ch1); + pb->suspended.overlay = 1; + pb->suspended.frame = fr; + memcpy(&pb->suspended.cmd, &last, sizeof(last)); + return; + } + } + pb->suspended.overlay = 0; + pb->suspended.frame = fr; + memcpy(&pb->suspended.cmd, &last, sizeof(last)); + return; +} + +static void resume_overlay(struct planb *pb) +{ + + DEBUG("PlanB: resume_overlay: %d\n", pb->suspend); + + if(pb->suspend > 1) + return; + if(pb->suspended.frame != -1) { + memcpy((void*)pb->last_cmd[pb->suspended.frame], + &pb->suspended.cmd, sizeof(pb->suspended.cmd)); + } + if(ACTIVE & in_le32(&pb->planb_base->ch1.status)) { + goto finish; + } + if(pb->suspended.overlay) { + + DEBUG("PlanB: overlay being resumed\n"); + + st_le16 (&pb->ch1_cmd->command, DBDMA_NOP); + st_le16 (&pb->ch2_cmd->command, DBDMA_NOP); + /* Set command buffer addresses */ + st_le32(&pb->planb_base->ch1.cmdptr, + virt_to_bus(pb->overlay_last1)); + out_le32(&pb->planb_base->ch2.cmdptr, + virt_to_bus(pb->overlay_last2)); + /* Start the DMA controller */ + out_le32 (&pb->planb_base->ch2.control, + PLANB_CLR(PAUSE) | PLANB_SET(RUN|WAKE)); + out_le32 (&pb->planb_base->ch1.control, + PLANB_CLR(PAUSE) | PLANB_SET(RUN|WAKE)); + } else if(pb->suspended.frame != -1) { + out_le32(&pb->planb_base->ch1.cmdptr, + virt_to_bus(pb->last_cmd[pb->suspended.frame])); + out_le32 (&pb->planb_base->ch1.control, + PLANB_CLR(PAUSE) | PLANB_SET(RUN|WAKE)); + } + +finish: + pb->suspend--; + wake_up_interruptible(&pb->suspendq); +} + +static void add_clip(struct planb *pb, struct video_clip *clip) +{ + 
volatile unsigned char *base; + int xc = clip->x, yc = clip->y; + int wc = clip->width, hc = clip->height; + int ww = pb->win.width, hw = pb->win.height; + int x, y, xtmp1, xtmp2; + + DEBUG("PlanB: clip %dx%d+%d+%d\n", wc, hc, xc, yc); + + if(xc < 0) { + wc += xc; + xc = 0; + } + if(yc < 0) { + hc += yc; + yc = 0; + } + if(xc + wc > ww) + wc = ww - xc; + if(wc <= 0) /* Nothing to do */ + return; + if(yc + hc > hw) + hc = hw - yc; + + for (y = yc; y < yc+hc; y++) { + xtmp1=xc>>3; + xtmp2=(xc+wc)>>3; + base = pb->mask + y*96; + if(xc != 0 || wc >= 8) + *(base + xtmp1) &= (unsigned char)(0x00ff & + (0xff00 >> (xc&7))); + for (x = xtmp1 + 1; x < xtmp2; x++) { + *(base + x) = 0; + } + if(xc < (ww & ~0x7)) + *(base + xtmp2) &= (unsigned char)(0x00ff >> + ((xc+wc) & 7)); + } + + return; +} + +static void fill_cmd_buff(struct planb *pb) +{ + int restore = 0; + volatile struct dbdma_cmd last; + + DEBUG("PlanB: fill_cmd_buff()\n"); + + if(pb->overlay_last1 != pb->ch1_cmd) { + restore = 1; + last = *(pb->overlay_last1); + } + memset ((void *) pb->ch1_cmd, 0, 2 * pb->tab_size + * sizeof(struct dbdma_cmd)); + cmd_buff (pb); + if(restore) + *(pb->overlay_last1) = last; + if(pb->suspended.overlay) { + unsigned long jump_addr = in_le32(&pb->overlay_last1->cmd_dep); + if(jump_addr != pb->ch1_cmd_phys) { + int i; + + DEBUG("PlanB: adjusting ch1's jump address\n"); + + for(i = 0; i < MAX_GBUFFERS; i++) { + if(pb->need_pre_capture[i]) { + if(jump_addr == virt_to_bus(pb->pre_cmd[i])) + goto found; + } else { + if(jump_addr == virt_to_bus(pb->cap_cmd[i])) + goto found; + } + } + + DEBUG("PlanB: not found...\n"); + + goto out; +found: + if(pb->need_pre_capture[i]) + out_le32(&pb->pre_cmd[i]->phy_addr, + virt_to_bus(pb->overlay_last1)); + else + out_le32(&pb->cap_cmd[i]->phy_addr, + virt_to_bus(pb->overlay_last1)); + } + } +out: + pb->cmd_buff_inited = 1; + + return; +} + +static void cmd_buff(struct planb *pb) +{ + int i, bpp, count, nlines, stepsize, interlace; + unsigned long base, jump, addr_com, addr_dep; + volatile struct dbdma_cmd *c1 = pb->ch1_cmd; + volatile struct dbdma_cmd *c2 = pb->ch2_cmd; + + interlace = pb->win.interlace; + bpp = pb->win.bpp; + count = (bpp * ((pb->win.x + pb->win.width > pb->win.swidth) ? + (pb->win.swidth - pb->win.x) : pb->win.width)); + nlines = ((pb->win.y + pb->win.height > pb->win.sheight) ? 
+ (pb->win.sheight - pb->win.y) : pb->win.height); + + /* Do video in: */ + + /* Preamble commands: */ + addr_com = virt_to_bus(c1); + addr_dep = virt_to_bus(&c1->cmd_dep); + tab_cmd_dbdma(c1++, DBDMA_NOP, 0); + jump = virt_to_bus(c1+16); /* 14 by cmd_geo_setup() and 2 for padding */ + if((c1 = cmd_geo_setup(c1, pb->win.width, pb->win.height, interlace, + bpp, 1, pb)) == NULL) { + printk(KERN_WARNING "PlanB: encountered serious problems\n"); + tab_cmd_dbdma(pb->ch1_cmd + 1, DBDMA_STOP, 0); + tab_cmd_dbdma(pb->ch2_cmd + 1, DBDMA_STOP, 0); + return; + } + tab_cmd_store(c1++, addr_com, (unsigned)(DBDMA_NOP | BR_ALWAYS) << 16); + tab_cmd_store(c1++, addr_dep, jump); + tab_cmd_store(c1++, (unsigned)(&pb->planb_base_phys->ch1.wait_sel), + PLANB_SET(FIELD_SYNC)); + /* (1) wait for field sync to be set */ + tab_cmd_dbdma(c1++, DBDMA_NOP | WAIT_IFCLR, 0); + tab_cmd_store(c1++, (unsigned)(&pb->planb_base_phys->ch1.br_sel), + PLANB_SET(ODD_FIELD)); + /* wait for field sync to be cleared */ + tab_cmd_dbdma(c1++, DBDMA_NOP | WAIT_IFSET, 0); + /* if not odd field, wait until field sync is set again */ + tab_cmd_dbdma(c1, DBDMA_NOP | BR_IFSET, virt_to_bus(c1-3)); c1++; + /* assert ch_sync to ch2 */ + tab_cmd_store(c1++, (unsigned)(&pb->planb_base_phys->ch2.control), + PLANB_SET(CH_SYNC)); + tab_cmd_store(c1++, (unsigned)(&pb->planb_base_phys->ch1.br_sel), + PLANB_SET(DMA_ABORT)); + + base = (pb->frame_buffer_phys + pb->offset + pb->win.y * (pb->win.bpl + + pb->win.pad) + pb->win.x * bpp); + + if (interlace) { + stepsize = 2; + jump = virt_to_bus(c1 + (nlines + 1) / 2); + } else { + stepsize = 1; + jump = virt_to_bus(c1 + nlines); + } + + /* even field data: */ + for (i=0; i < nlines; i += stepsize, c1++) + tab_cmd_gen(c1, INPUT_MORE | KEY_STREAM0 | BR_IFSET, + count, base + i * (pb->win.bpl + pb->win.pad), jump); + + /* For non-interlaced, we use even fields only */ + if (!interlace) + goto cmd_tab_data_end; + + /* Resync to odd field */ + /* (2) wait for field sync to be set */ + tab_cmd_dbdma(c1++, DBDMA_NOP | WAIT_IFCLR, 0); + tab_cmd_store(c1++, (unsigned)(&pb->planb_base_phys->ch1.br_sel), + PLANB_SET(ODD_FIELD)); + /* wait for field sync to be cleared */ + tab_cmd_dbdma(c1++, DBDMA_NOP | WAIT_IFSET, 0); + /* if not odd field, wait until field sync is set again */ + tab_cmd_dbdma(c1, DBDMA_NOP | BR_IFCLR, virt_to_bus(c1-3)); c1++; + /* assert ch_sync to ch2 */ + tab_cmd_store(c1++, (unsigned)(&pb->planb_base_phys->ch2.control), + PLANB_SET(CH_SYNC)); + tab_cmd_store(c1++, (unsigned)(&pb->planb_base_phys->ch1.br_sel), + PLANB_SET(DMA_ABORT)); + + /* odd field data: */ + jump = virt_to_bus(c1 + nlines / 2); + for (i=1; i < nlines; i += stepsize, c1++) + tab_cmd_gen(c1, INPUT_MORE | KEY_STREAM0 | BR_IFSET, count, + base + i * (pb->win.bpl + pb->win.pad), jump); + + /* And jump back to the start */ +cmd_tab_data_end: + pb->overlay_last1 = c1; /* keep a pointer to the last command */ + tab_cmd_dbdma(c1, DBDMA_NOP | BR_ALWAYS, virt_to_bus(pb->ch1_cmd)); + + /* Clipmask command buffer */ + + /* Preamble commands: */ + tab_cmd_dbdma(c2++, DBDMA_NOP, 0); + tab_cmd_store(c2++, (unsigned)(&pb->planb_base_phys->ch2.wait_sel), + PLANB_SET(CH_SYNC)); + /* wait until ch1 asserts ch_sync */ + tab_cmd_dbdma(c2++, DBDMA_NOP | WAIT_IFCLR, 0); + /* clear ch_sync asserted by ch1 */ + tab_cmd_store(c2++, (unsigned)(&pb->planb_base_phys->ch2.control), + PLANB_CLR(CH_SYNC)); + tab_cmd_store(c2++, (unsigned)(&pb->planb_base_phys->ch2.wait_sel), + PLANB_SET(FIELD_SYNC)); + tab_cmd_store(c2++, 
(unsigned)(&pb->planb_base_phys->ch2.br_sel), + PLANB_SET(ODD_FIELD)); + + /* jump to end of even field if appropriate */ + /* this points to (interlace)? pos. C: pos. B */ + jump = (interlace) ? virt_to_bus(c2 + (nlines + 1) / 2 + 2): + virt_to_bus(c2 + nlines + 2); + /* if odd field, skip over to odd field clipmasking */ + tab_cmd_dbdma(c2++, DBDMA_NOP | BR_IFSET, jump); + + /* even field mask: */ + tab_cmd_store(c2++, (unsigned)(&pb->planb_base_phys->ch2.br_sel), + PLANB_SET(DMA_ABORT)); + /* this points to pos. B */ + jump = (interlace) ? virt_to_bus(c2 + nlines + 1): + virt_to_bus(c2 + nlines); + base = virt_to_bus(pb->mask); + for (i=0; i < nlines; i += stepsize, c2++) + tab_cmd_gen(c2, OUTPUT_MORE | KEY_STREAM0 | BR_IFSET, 96, + base + i * 96, jump); + + /* For non-interlaced, we use only even fields */ + if(!interlace) + goto cmd_tab_mask_end; + + /* odd field mask: */ +/* C */ tab_cmd_store(c2++, (unsigned)(&pb->planb_base_phys->ch2.br_sel), + PLANB_SET(DMA_ABORT)); + /* this points to pos. B */ + jump = virt_to_bus(c2 + nlines / 2); + base = virt_to_bus(pb->mask); + for (i=1; i < nlines; i += 2, c2++) /* abort if set */ + tab_cmd_gen(c2, OUTPUT_MORE | KEY_STREAM0 | BR_IFSET, 96, + base + i * 96, jump); + + /* Inform channel 1 and jump back to start */ +cmd_tab_mask_end: + /* ok, I just realized this is kind of flawed. */ + /* this part is reached only after odd field clipmasking. */ + /* wanna clean up? */ + /* wait for field sync to be set */ + /* corresponds to fsync (1) of ch1 */ +/* B */ tab_cmd_dbdma(c2++, DBDMA_NOP | WAIT_IFCLR, 0); + /* restart ch1, meant to clear any dead bit or something */ + tab_cmd_store(c2++, (unsigned)(&pb->planb_base_phys->ch1.control), + PLANB_CLR(RUN)); + tab_cmd_store(c2++, (unsigned)(&pb->planb_base_phys->ch1.control), + PLANB_SET(RUN)); + pb->overlay_last2 = c2; /* keep a pointer to the last command */ + /* start over even field clipmasking */ + tab_cmd_dbdma(c2, DBDMA_NOP | BR_ALWAYS, virt_to_bus(pb->ch2_cmd)); + + eieio(); + return; +} + +/*********************************/ +/* grabdisplay support functions */ +/*********************************/ + +static int palette2fmt[] = { + 0, + PLANB_GRAY, + 0, + 0, + 0, + PLANB_COLOUR32, + PLANB_COLOUR15, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, +}; + +#define PLANB_PALETTE_MAX 15 + +static inline int overlay_is_active(struct planb *pb) +{ + unsigned int size = pb->tab_size * sizeof(struct dbdma_cmd); + unsigned int caddr = (unsigned)in_le32(&pb->planb_base->ch1.cmdptr); + + return (in_le32(&pb->overlay_last1->cmd_dep) == pb->ch1_cmd_phys) + && (caddr < (pb->ch1_cmd_phys + size)) + && (caddr >= (unsigned)pb->ch1_cmd_phys); +} + +static int vgrab(struct planb *pb, struct video_mmap *mp) +{ + unsigned int fr = mp->frame; + unsigned int format; + + if(pb->rawbuf==NULL) { + int err; + if((err=grabbuf_alloc(pb))) + return err; + } + + IDEBUG("PlanB: grab %d: %dx%d(%u)\n", pb->grabbing, + mp->width, mp->height, fr); + + if(pb->grabbing >= MAX_GBUFFERS) + return -ENOBUFS; + if(fr > (MAX_GBUFFERS - 1) || fr < 0) + return -EINVAL; + if(mp->height <= 0 || mp->width <= 0) + return -EINVAL; + if(mp->format < 0 || mp->format >= PLANB_PALETTE_MAX) + return -EINVAL; + if((format = palette2fmt[mp->format]) == 0) + return -EINVAL; + if (mp->height * mp->width * format > PLANB_MAX_FBUF) /* format = bpp */ + return -EINVAL; + + planb_lock(pb); + if(mp->width != pb->gwidth[fr] || mp->height != pb->gheight[fr] || + format != pb->gfmt[fr] || (pb->gnorm_switch[fr])) { + int i; +#ifndef PLANB_GSCANLINE + unsigned int osize = 
pb->gwidth[fr] * pb->gheight[fr] + * pb->gfmt[fr]; + unsigned int nsize = mp->width * mp->height * format; +#endif + + IDEBUG("PlanB: gwidth = %d, gheight = %d, mp->format = %u\n", + mp->width, mp->height, mp->format); + +#ifndef PLANB_GSCANLINE + if(pb->gnorm_switch[fr]) + nsize = 0; + if (nsize < osize) { + for(i = pb->gbuf_idx[fr]; osize > 0; i++) { + memset((void *)pb->rawbuf[i], 0, PAGE_SIZE); + osize -= PAGE_SIZE; + } + } + for(i = pb->l_fr_addr_idx[fr]; i < pb->l_fr_addr_idx[fr] + + pb->lnum[fr]; i++) + memset((void *)pb->rawbuf[i], 0, PAGE_SIZE); +#else +/* XXX TODO */ +/* + if(pb->gnorm_switch[fr]) + memset((void *)pb->gbuffer[fr], 0, + pb->gbytes_per_line * pb->gheight[fr]); + else { + if(mp-> + for(i = 0; i < pb->gheight[fr]; i++) { + memset((void *)(pb->gbuffer[fr] + + pb->gbytes_per_line * i + } + } +*/ +#endif + pb->gwidth[fr] = mp->width; + pb->gheight[fr] = mp->height; + pb->gfmt[fr] = format; + pb->last_cmd[fr] = setup_grab_cmd(fr, pb); + planb_pre_capture(fr, pb->gfmt[fr], pb); /* gfmt = bpp */ + pb->need_pre_capture[fr] = 1; + pb->gnorm_switch[fr] = 0; + } else + pb->need_pre_capture[fr] = 0; + pb->frame_stat[fr] = GBUFFER_GRABBING; + if(!(ACTIVE & in_le32(&pb->planb_base->ch1.status))) { + + IDEBUG("PlanB: ch1 inactive, initiating grabbing\n"); + + planb_dbdma_stop(&pb->planb_base->ch1); + if(pb->need_pre_capture[fr]) { + + IDEBUG("PlanB: padding pre-capture sequence\n"); + + out_le32 (&pb->planb_base->ch1.cmdptr, + virt_to_bus(pb->pre_cmd[fr])); + } else { + tab_cmd_dbdma(pb->last_cmd[fr], DBDMA_STOP, 0); + tab_cmd_dbdma(pb->cap_cmd[fr], DBDMA_NOP, 0); + /* let's be on the safe side. here is not timing critical. */ + tab_cmd_dbdma((pb->cap_cmd[fr] + 1), DBDMA_NOP, 0); + out_le32 (&pb->planb_base->ch1.cmdptr, + virt_to_bus(pb->cap_cmd[fr])); + } + planb_dbdma_restart(&pb->planb_base->ch1); + pb->last_fr = fr; + } else { + int i; + + IDEBUG("PlanB: ch1 active, grabbing being queued\n"); + + if((pb->last_fr == -1) || ((pb->last_fr == -2) && + overlay_is_active(pb))) { + + IDEBUG("PlanB: overlay is active, grabbing defered\n"); + + tab_cmd_dbdma(pb->last_cmd[fr], + DBDMA_NOP | BR_ALWAYS, + virt_to_bus(pb->ch1_cmd)); + if(pb->need_pre_capture[fr]) { + + IDEBUG("PlanB: padding pre-capture sequence\n"); + + tab_cmd_store(pb->pre_cmd[fr], + virt_to_bus(&pb->overlay_last1->cmd_dep), + virt_to_bus(pb->ch1_cmd)); + eieio(); + out_le32 (&pb->overlay_last1->cmd_dep, + virt_to_bus(pb->pre_cmd[fr])); + } else { + tab_cmd_store(pb->cap_cmd[fr], + virt_to_bus(&pb->overlay_last1->cmd_dep), + virt_to_bus(pb->ch1_cmd)); + tab_cmd_dbdma((pb->cap_cmd[fr] + 1), + DBDMA_NOP, 0); + eieio(); + out_le32 (&pb->overlay_last1->cmd_dep, + virt_to_bus(pb->cap_cmd[fr])); + } + for(i = 0; overlay_is_active(pb) && i < 999; i++) + IDEBUG("PlanB: waiting for overlay done\n"); + tab_cmd_dbdma(pb->ch1_cmd, DBDMA_NOP, 0); + pb->prev_last_fr = fr; + pb->last_fr = -2; + } else if(pb->last_fr == -2) { + + IDEBUG("PlanB: mixed mode detected, grabbing" + " will be done before activating overlay\n"); + + tab_cmd_dbdma(pb->ch1_cmd, DBDMA_NOP, 0); + if(pb->need_pre_capture[fr]) { + + IDEBUG("PlanB: padding pre-capture sequence\n"); + + tab_cmd_dbdma(pb->last_cmd[pb->prev_last_fr], + DBDMA_NOP | BR_ALWAYS, + virt_to_bus(pb->pre_cmd[fr])); + eieio(); + } else { + tab_cmd_dbdma(pb->cap_cmd[fr], DBDMA_NOP, 0); + if(pb->gwidth[pb->prev_last_fr] != + pb->gwidth[fr] + || pb->gheight[pb->prev_last_fr] != + pb->gheight[fr] + || pb->gfmt[pb->prev_last_fr] != + pb->gfmt[fr]) + tab_cmd_dbdma((pb->cap_cmd[fr] + 1), + DBDMA_NOP, 
0); + else + tab_cmd_dbdma((pb->cap_cmd[fr] + 1), + DBDMA_NOP | BR_ALWAYS, + virt_to_bus(pb->cap_cmd[fr] + 16)); + tab_cmd_dbdma(pb->last_cmd[pb->prev_last_fr], + DBDMA_NOP | BR_ALWAYS, + virt_to_bus(pb->cap_cmd[fr])); + eieio(); + } + tab_cmd_dbdma(pb->last_cmd[fr], + DBDMA_NOP | BR_ALWAYS, + virt_to_bus(pb->ch1_cmd)); + eieio(); + pb->prev_last_fr = fr; + pb->last_fr = -2; + } else { + + IDEBUG("PlanB: active grabbing session detected\n"); + + if(pb->need_pre_capture[fr]) { + + IDEBUG("PlanB: padding pre-capture sequence\n"); + + tab_cmd_dbdma(pb->last_cmd[pb->last_fr], + DBDMA_NOP | BR_ALWAYS, + virt_to_bus(pb->pre_cmd[fr])); + eieio(); + } else { + tab_cmd_dbdma(pb->last_cmd[fr], DBDMA_STOP, 0); + tab_cmd_dbdma(pb->cap_cmd[fr], DBDMA_NOP, 0); + if(pb->gwidth[pb->last_fr] != pb->gwidth[fr] + || pb->gheight[pb->last_fr] != + pb->gheight[fr] + || pb->gfmt[pb->last_fr] != + pb->gfmt[fr]) + tab_cmd_dbdma((pb->cap_cmd[fr] + 1), + DBDMA_NOP, 0); + else + tab_cmd_dbdma((pb->cap_cmd[fr] + 1), + DBDMA_NOP | BR_ALWAYS, + virt_to_bus(pb->cap_cmd[fr] + 16)); + tab_cmd_dbdma(pb->last_cmd[pb->last_fr], + DBDMA_NOP | BR_ALWAYS, + virt_to_bus(pb->cap_cmd[fr])); + eieio(); + } + pb->last_fr = fr; + } + if(!(ACTIVE & in_le32(&pb->planb_base->ch1.status))) { + + IDEBUG("PlanB: became inactive in the mean time..." + "reactivating\n"); + + planb_dbdma_stop(&pb->planb_base->ch1); + out_le32 (&pb->planb_base->ch1.cmdptr, + virt_to_bus(pb->cap_cmd[fr])); + planb_dbdma_restart(&pb->planb_base->ch1); + } + } + pb->grabbing++; + planb_unlock(pb); + + return 0; +} + +static void planb_pre_capture(int fr, int bpp, struct planb *pb) +{ + volatile struct dbdma_cmd *c1 = pb->pre_cmd[fr]; + int interlace = (pb->gheight[fr] > pb->maxlines/2)? 1: 0; + + tab_cmd_dbdma(c1++, DBDMA_NOP, 0); + if((c1 = cmd_geo_setup(c1, pb->gwidth[fr], pb->gheight[fr], interlace, + bpp, 0, pb)) == NULL) { + printk(KERN_WARNING "PlanB: encountered some problems\n"); + tab_cmd_dbdma(pb->pre_cmd[fr] + 1, DBDMA_STOP, 0); + return; + } + /* Sync to even field */ + tab_cmd_store(c1++, (unsigned)(&pb->planb_base_phys->ch1.wait_sel), + PLANB_SET(FIELD_SYNC)); + tab_cmd_dbdma(c1++, DBDMA_NOP | WAIT_IFCLR, 0); + tab_cmd_store(c1++, (unsigned)(&pb->planb_base_phys->ch1.br_sel), + PLANB_SET(ODD_FIELD)); + tab_cmd_dbdma(c1++, DBDMA_NOP | WAIT_IFSET, 0); + tab_cmd_dbdma(c1, DBDMA_NOP | BR_IFSET, virt_to_bus(c1-3)); c1++; + tab_cmd_dbdma(c1++, DBDMA_NOP | INTR_ALWAYS, 0); + tab_cmd_store(c1++, (unsigned)(&pb->planb_base_phys->ch1.br_sel), + PLANB_SET(DMA_ABORT)); + /* For non-interlaced, we use even fields only */ + if (pb->gheight[fr] <= pb->maxlines/2) + goto cmd_tab_data_end; + /* Sync to odd field */ + tab_cmd_dbdma(c1++, DBDMA_NOP | WAIT_IFCLR, 0); + tab_cmd_store(c1++, (unsigned)(&pb->planb_base_phys->ch1.br_sel), + PLANB_SET(ODD_FIELD)); + tab_cmd_dbdma(c1++, DBDMA_NOP | WAIT_IFSET, 0); + tab_cmd_dbdma(c1, DBDMA_NOP | BR_IFCLR, virt_to_bus(c1-3)); c1++; + tab_cmd_store(c1++, (unsigned)(&pb->planb_base_phys->ch1.br_sel), + PLANB_SET(DMA_ABORT)); +cmd_tab_data_end: + tab_cmd_dbdma(c1, DBDMA_NOP | BR_ALWAYS, virt_to_bus(pb->cap_cmd[fr])); + + eieio(); +} + +static volatile struct dbdma_cmd *setup_grab_cmd(int fr, struct planb *pb) +{ + int i, bpp, count, nlines, stepsize, interlace; +#ifdef PLANB_GSCANLINE + int scanline; +#else + int nlpp, leftover1; + unsigned long base; +#endif + unsigned long jump; + int pagei; + volatile struct dbdma_cmd *c1; + volatile struct dbdma_cmd *jump_addr; + + c1 = pb->cap_cmd[fr]; + interlace = (pb->gheight[fr] > 
pb->maxlines/2)? 1: 0; + bpp = pb->gfmt[fr]; /* gfmt = bpp */ + count = bpp * pb->gwidth[fr]; + nlines = pb->gheight[fr]; +#ifdef PLANB_GSCANLINE + scanline = pb->gbytes_per_line; +#else + pb->lsize[fr] = count; + pb->lnum[fr] = 0; +#endif + + /* Do video in: */ + + /* Preamble commands: */ + tab_cmd_dbdma(c1++, DBDMA_NOP, 0); + tab_cmd_dbdma(c1, DBDMA_NOP | BR_ALWAYS, virt_to_bus(c1 + 16)); c1++; + if((c1 = cmd_geo_setup(c1, pb->gwidth[fr], pb->gheight[fr], interlace, + bpp, 0, pb)) == NULL) { + printk(KERN_WARNING "PlanB: encountered serious problems\n"); + tab_cmd_dbdma(pb->cap_cmd[fr] + 1, DBDMA_STOP, 0); + return (pb->cap_cmd[fr] + 2); + } + tab_cmd_store(c1++, (unsigned)(&pb->planb_base_phys->ch1.wait_sel), + PLANB_SET(FIELD_SYNC)); + tab_cmd_dbdma(c1++, DBDMA_NOP | WAIT_IFCLR, 0); + tab_cmd_store(c1++, (unsigned)(&pb->planb_base_phys->ch1.br_sel), + PLANB_SET(ODD_FIELD)); + tab_cmd_dbdma(c1++, DBDMA_NOP | WAIT_IFSET, 0); + tab_cmd_dbdma(c1, DBDMA_NOP | BR_IFSET, virt_to_bus(c1-3)); c1++; + tab_cmd_dbdma(c1++, DBDMA_NOP | INTR_ALWAYS, 0); + tab_cmd_store(c1++, (unsigned)(&pb->planb_base_phys->ch1.br_sel), + PLANB_SET(DMA_ABORT)); + + if (interlace) { + stepsize = 2; + jump_addr = c1 + TAB_FACTOR * (nlines + 1) / 2; + } else { + stepsize = 1; + jump_addr = c1 + TAB_FACTOR * nlines; + } + jump = virt_to_bus(jump_addr); + + /* even field data: */ + + pagei = pb->gbuf_idx[fr]; +#ifdef PLANB_GSCANLINE + for (i = 0; i < nlines; i += stepsize) { + tab_cmd_gen(c1++, INPUT_MORE | BR_IFSET, count, + virt_to_bus(pb->rawbuf[pagei + + i * scanline / PAGE_SIZE]), jump); + } +#else + i = 0; + leftover1 = 0; + do { + int j; + + base = virt_to_bus(pb->rawbuf[pagei]); + nlpp = (PAGE_SIZE - leftover1) / count / stepsize; + for(j = 0; j < nlpp && i < nlines; j++, i += stepsize, c1++) + tab_cmd_gen(c1, INPUT_MORE | KEY_STREAM0 | BR_IFSET, + count, base + count * j * stepsize + leftover1, jump); + if(i < nlines) { + int lov0 = PAGE_SIZE - count * nlpp * stepsize - leftover1; + + if(lov0 == 0) + leftover1 = 0; + else { + if(lov0 >= count) { + tab_cmd_gen(c1++, INPUT_MORE | BR_IFSET, count, base + + count * nlpp * stepsize + leftover1, jump); + } else { + pb->l_to_addr[fr][pb->lnum[fr]] = pb->rawbuf[pagei] + + count * nlpp * stepsize + leftover1; + pb->l_to_next_idx[fr][pb->lnum[fr]] = pagei + 1; + pb->l_to_next_size[fr][pb->lnum[fr]] = count - lov0; + tab_cmd_gen(c1++, INPUT_MORE | BR_IFSET, count, + virt_to_bus(pb->rawbuf[pb->l_fr_addr_idx[fr] + + pb->lnum[fr]]), jump); + if(++pb->lnum[fr] > MAX_LNUM) + pb->lnum[fr]--; + } + leftover1 = count * stepsize - lov0; + i += stepsize; + } + } + pagei++; + } while(i < nlines); + tab_cmd_dbdma(c1, DBDMA_NOP | BR_ALWAYS, jump); + c1 = jump_addr; +#endif /* PLANB_GSCANLINE */ + + /* For non-interlaced, we use even fields only */ + if (!interlace) + goto cmd_tab_data_end; + + /* Sync to odd field */ + tab_cmd_dbdma(c1++, DBDMA_NOP | WAIT_IFCLR, 0); + tab_cmd_store(c1++, (unsigned)(&pb->planb_base_phys->ch1.br_sel), + PLANB_SET(ODD_FIELD)); + tab_cmd_dbdma(c1++, DBDMA_NOP | WAIT_IFSET, 0); + tab_cmd_dbdma(c1, DBDMA_NOP | BR_IFCLR, virt_to_bus(c1-3)); c1++; + tab_cmd_store(c1++, (unsigned)(&pb->planb_base_phys->ch1.br_sel), + PLANB_SET(DMA_ABORT)); + + /* odd field data: */ + jump_addr = c1 + TAB_FACTOR * nlines / 2; + jump = virt_to_bus(jump_addr); +#ifdef PLANB_GSCANLINE + for (i = 1; i < nlines; i += stepsize) { + tab_cmd_gen(c1++, INPUT_MORE | BR_IFSET, count, + virt_to_bus(pb->rawbuf[pagei + + i * scanline / PAGE_SIZE]), jump); + } +#else + i = 1; + leftover1 = 0; 
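A note on the scatter logic above: when PLANB_GSCANLINE is not defined, a grab line whose bytes would cross a page boundary is not split in the DMA program. Instead the whole line is redirected into one of the spare pages reserved at l_fr_addr_idx[], and l_to_addr[], l_to_next_idx[] and l_to_next_size[] record where its two halves really belong, so that planb_irq() can memcpy() them back once the frame completes. The standalone sketch below only illustrates the straddle test itself; the page and line sizes are made-up example values, not driver constants.

#include <stdio.h>

#define PAGE_SZ    4096UL   /* assumed MMU page size */
#define LINE_BYTES 1536UL   /* e.g. 768 pixels at 2 bytes per pixel */

int main(void)
{
	unsigned long i, nlines = 288, splits = 0;

	for (i = 0; i < nlines; i++) {
		unsigned long start = i * LINE_BYTES;
		unsigned long end = start + LINE_BYTES - 1;

		/* a straddling line is DMA'd whole into a spare page and
		   stitched back by two memcpy()s in the frame interrupt */
		if (start / PAGE_SZ != end / PAGE_SZ)
			splits++;
	}
	printf("%lu of %lu lines straddle a page boundary\n", splits, nlines);
	return 0;
}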
+ pagei = pb->gbuf_idx[fr]; + if(nlines <= 1) + goto skip; + do { + int j; + + base = virt_to_bus(pb->rawbuf[pagei]); + nlpp = (PAGE_SIZE - leftover1) / count / stepsize; + if(leftover1 >= count) { + tab_cmd_gen(c1++, INPUT_MORE | KEY_STREAM0 | BR_IFSET, count, + base + leftover1 - count, jump); + i += stepsize; + } + for(j = 0; j < nlpp && i < nlines; j++, i += stepsize, c1++) + tab_cmd_gen(c1, INPUT_MORE | KEY_STREAM0 | BR_IFSET, count, + base + count * (j * stepsize + 1) + leftover1, jump); + if(i < nlines) { + int lov0 = PAGE_SIZE - count * nlpp * stepsize - leftover1; + + if(lov0 == 0) + leftover1 = 0; + else { + if(lov0 > count) { + pb->l_to_addr[fr][pb->lnum[fr]] = pb->rawbuf[pagei] + + count * (nlpp * stepsize + 1) + leftover1; + pb->l_to_next_idx[fr][pb->lnum[fr]] = pagei + 1; + pb->l_to_next_size[fr][pb->lnum[fr]] = count * stepsize + - lov0; + tab_cmd_gen(c1++, INPUT_MORE | BR_IFSET, count, + virt_to_bus(pb->rawbuf[pb->l_fr_addr_idx[fr] + + pb->lnum[fr]]), jump); + if(++pb->lnum[fr] > MAX_LNUM) + pb->lnum[fr]--; + i += stepsize; + } + leftover1 = count * stepsize - lov0; + } + } + pagei++; + } while(i < nlines); +skip: + tab_cmd_dbdma(c1, DBDMA_NOP | BR_ALWAYS, jump); + c1 = jump_addr; +#endif /* PLANB_GSCANLINE */ + +cmd_tab_data_end: + tab_cmd_store(c1++, (unsigned)(&pb->planb_base_phys->intr_stat), + (fr << 9) | PLANB_FRM_IRQ | PLANB_GEN_IRQ); + /* stop it */ + tab_cmd_dbdma(c1, DBDMA_STOP, 0); + + eieio(); + return c1; +} + +static void planb_irq(int irq, void *dev_id, struct pt_regs * regs) +{ + unsigned int stat, astat; + struct planb *pb = (struct planb *)dev_id; + + IDEBUG("PlanB: planb_irq()\n"); + + /* get/clear interrupt status bits */ + eieio(); + stat = in_le32(&pb->planb_base->intr_stat); + astat = stat & pb->intr_mask; + out_le32(&pb->planb_base->intr_stat, PLANB_FRM_IRQ + & ~astat & stat & ~PLANB_GEN_IRQ); + IDEBUG("PlanB: stat = %X, astat = %X\n", stat, astat); + + if(astat & PLANB_FRM_IRQ) { + unsigned int fr = stat >> 9; +#ifndef PLANB_GSCANLINE + int i; +#endif + IDEBUG("PlanB: PLANB_FRM_IRQ\n"); + + pb->gcount++; + + IDEBUG("PlanB: grab %d: fr = %d, gcount = %d\n", + pb->grabbing, fr, pb->gcount); +#ifndef PLANB_GSCANLINE + IDEBUG("PlanB: %d * %d bytes are being copied over\n", + pb->lnum[fr], pb->lsize[fr]); + for(i = 0; i < pb->lnum[fr]; i++) { + int first = pb->lsize[fr] - pb->l_to_next_size[fr][i]; + + memcpy(pb->l_to_addr[fr][i], + pb->rawbuf[pb->l_fr_addr_idx[fr] + i], + first); + memcpy(pb->rawbuf[pb->l_to_next_idx[fr][i]], + pb->rawbuf[pb->l_fr_addr_idx[fr] + i] + first, + pb->l_to_next_size[fr][i]); + } +#endif + pb->frame_stat[fr] = GBUFFER_DONE; + pb->grabbing--; + wake_up_interruptible(&pb->capq); + return; + } + /* incorrect interrupts? */ + pb->intr_mask = PLANB_CLR_IRQ; + out_le32(&pb->planb_base->intr_stat, PLANB_CLR_IRQ); + printk(KERN_ERR "PlanB: IRQ lockup, cleared intrrupts" + " unconditionally\n"); +} + +/******************************* + * Device Operations functions * + *******************************/ + +static int planb_open(struct video_device *dev, int mode) +{ + struct planb *pb = (struct planb *)dev; + + if (pb->user == 0) { + int err; + if((err = planb_prepare_open(pb)) != 0) + return err; + } + pb->user++; + + DEBUG("PlanB: device opened\n"); + + MOD_INC_USE_COUNT; + return 0; +} + +static void planb_close(struct video_device *dev) +{ + struct planb *pb = (struct planb *)dev; + + if(pb->user < 1) /* ??? 
*/ + return; + planb_lock(pb); + if (pb->user == 1) { + if (pb->overlay) { + planb_dbdma_stop(&pb->planb_base->ch2); + planb_dbdma_stop(&pb->planb_base->ch1); + pb->overlay = 0; + } + planb_prepare_close(pb); + } + pb->user--; + planb_unlock(pb); + + DEBUG("PlanB: device closed\n"); + + MOD_DEC_USE_COUNT; +} + +static long planb_read(struct video_device *v, char *buf, unsigned long count, + int nonblock) +{ + DEBUG("planb: read request\n"); + return -EINVAL; +} + +static long planb_write(struct video_device *v, const char *buf, + unsigned long count, int nonblock) +{ + DEBUG("planb: write request\n"); + return -EINVAL; +} + +static int planb_ioctl(struct video_device *dev, unsigned int cmd, void *arg) +{ + struct planb *pb=(struct planb *)dev; + + switch (cmd) + { + case VIDIOCGCAP: + { + struct video_capability b; + + DEBUG("PlanB: IOCTL VIDIOCGCAP\n"); + + strcpy (b.name, pb->video_dev.name); + b.type = VID_TYPE_OVERLAY | VID_TYPE_CLIPPING | + VID_TYPE_FRAMERAM | VID_TYPE_SCALES | + VID_TYPE_CAPTURE; + b.channels = 2; /* composite & svhs */ + b.audios = 0; + b.maxwidth = PLANB_MAXPIXELS; + b.maxheight = PLANB_MAXLINES; + b.minwidth = 32; /* wild guess */ + b.minheight = 32; + if (copy_to_user(arg,&b,sizeof(b))) + return -EFAULT; + return 0; + } + case VIDIOCSFBUF: + { + struct video_buffer v; + unsigned short bpp; + unsigned int fmt; + + DEBUG("PlanB: IOCTL VIDIOCSFBUF\n"); + + if (!capable(CAP_SYS_ADMIN) + || !capable(CAP_SYS_RAWIO)) + return -EPERM; + if (copy_from_user(&v, arg,sizeof(v))) + return -EFAULT; + planb_lock(pb); + switch(v.depth) { + case 8: + bpp = 1; + fmt = PLANB_GRAY; + break; + case 15: + case 16: + bpp = 2; + fmt = PLANB_COLOUR15; + break; + case 24: + case 32: + bpp = 4; + fmt = PLANB_COLOUR32; + break; + default: + planb_unlock(pb); + return -EINVAL; + } + if (bpp * v.width > v.bytesperline) { + planb_unlock(pb); + return -EINVAL; + } + pb->win.bpp = bpp; + pb->win.color_fmt = fmt; + pb->frame_buffer_phys = (unsigned long) v.base; + pb->win.sheight = v.height; + pb->win.swidth = v.width; + pb->picture.depth = pb->win.depth = v.depth; + pb->win.bpl = pb->win.bpp * pb->win.swidth; + pb->win.pad = v.bytesperline - pb->win.bpl; + + DEBUG("PlanB: Display at %p is %d by %d, bytedepth %d," + " bpl %d (+ %d)\n", v.base, v.width,v.height, + pb->win.bpp, pb->win.bpl, pb->win.pad); + + pb->cmd_buff_inited = 0; + if(pb->overlay) { + suspend_overlay(pb); + fill_cmd_buff(pb); + resume_overlay(pb); + } + planb_unlock(pb); + return 0; + } + case VIDIOCGFBUF: + { + struct video_buffer v; + + DEBUG("PlanB: IOCTL VIDIOCGFBUF\n"); + + v.base = (void *)pb->frame_buffer_phys; + v.height = pb->win.sheight; + v.width = pb->win.swidth; + v.depth = pb->win.depth; + v.bytesperline = pb->win.bpl + pb->win.pad; + if (copy_to_user(arg, &v, sizeof(v))) + return -EFAULT; + return 0; + } + case VIDIOCCAPTURE: + { + int i; + + if(copy_from_user(&i, arg, sizeof(i))) + return -EFAULT; + if(i==0) { + DEBUG("PlanB: IOCTL VIDIOCCAPTURE Stop\n"); + + if (!(pb->overlay)) + return 0; + planb_lock(pb); + pb->overlay = 0; + overlay_stop(pb); + planb_unlock(pb); + } else { + DEBUG("PlanB: IOCTL VIDIOCCAPTURE Start\n"); + + if (pb->frame_buffer_phys == 0 || + pb->win.width == 0 || + pb->win.height == 0) + return -EINVAL; + if (pb->overlay) + return 0; + planb_lock(pb); + pb->overlay = 1; + if(!(pb->cmd_buff_inited)) + fill_cmd_buff(pb); + overlay_start(pb); + planb_unlock(pb); + } + return 0; + } + case VIDIOCGCHAN: + { + struct video_channel v; + + DEBUG("PlanB: IOCTL VIDIOCGCHAN\n"); + + 
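For reference, the depth switch in the VIDIOCSFBUF case earlier in this handler is the driver's whole pixel-format policy: 8-bit maps to greyscale, 15/16-bit to RGB555, and 24/32-bit to a padded 32-bit RGB layout, with everything else rejected. A small userspace sketch of that mapping (the enum names here are stand-ins, not driver symbols):

#include <stdio.h>

enum fmt { FMT_BAD = 0, FMT_GRAY, FMT_RGB555, FMT_RGB32 };

static int depth_to_bpp(int depth, enum fmt *fmt)
{
	switch (depth) {
	case 8:
		*fmt = FMT_GRAY;   return 1;
	case 15: case 16:
		*fmt = FMT_RGB555; return 2;
	case 24: case 32:
		*fmt = FMT_RGB32;  return 4; /* 24bpp frames are padded to 4 bytes */
	default:
		*fmt = FMT_BAD;    return -1; /* the ioctl answers -EINVAL */
	}
}

int main(void)
{
	int i, depths[] = { 8, 15, 16, 24, 32, 12 };
	enum fmt f;

	for (i = 0; i < 6; i++)
		printf("depth %2d -> %d byte(s) per pixel\n",
		       depths[i], depth_to_bpp(depths[i], &f));
	return 0;
}

VIDIOCSFBUF additionally insists that bytesperline can hold a full line at the chosen depth (bpp * width must not exceed bytesperline) before it accepts the framebuffer description.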
if(copy_from_user(&v, arg,sizeof(v))) + return -EFAULT; + v.flags = 0; + v.tuners = 0; + v.type = VIDEO_TYPE_CAMERA; + v.norm = pb->win.norm; + switch(v.channel) + { + case 0: + strcpy(v.name,"Composite"); + break; + case 1: + strcpy(v.name,"SVHS"); + break; + default: + return -EINVAL; + break; + } + if(copy_to_user(arg,&v,sizeof(v))) + return -EFAULT; + + return 0; + } + case VIDIOCSCHAN: + { + struct video_channel v; + + DEBUG("PlanB: IOCTL VIDIOCSCHAN\n"); + + if(copy_from_user(&v, arg, sizeof(v))) + return -EFAULT; + + if (v.norm != pb->win.norm) { + int i, maxlines; + + switch (v.norm) + { + case VIDEO_MODE_PAL: + case VIDEO_MODE_SECAM: + maxlines = PLANB_MAXLINES; + break; + case VIDEO_MODE_NTSC: + maxlines = PLANB_NTSC_MAXLINES; + break; + default: + return -EINVAL; + break; + } + planb_lock(pb); + /* empty the grabbing queue */ + while(pb->grabbing) + interruptible_sleep_on(&pb->capq); + pb->maxlines = maxlines; + pb->win.norm = v.norm; + /* Stop overlay if running */ + suspend_overlay(pb); + for(i = 0; i < MAX_GBUFFERS; i++) + pb->gnorm_switch[i] = 1; + /* I know it's an overkill, but.... */ + fill_cmd_buff(pb); + /* ok, now init it accordingly */ + saa_init_regs (pb); + /* restart overlay if it was running */ + resume_overlay(pb); + planb_unlock(pb); + } + + switch(v.channel) + { + case 0: /* Composite */ + saa_set (SAA7196_IOCC, + ((saa_regs[pb->win.norm][SAA7196_IOCC] & + ~7) | 3), pb); + break; + case 1: /* SVHS */ + saa_set (SAA7196_IOCC, + ((saa_regs[pb->win.norm][SAA7196_IOCC] & + ~7) | 4), pb); + break; + default: + return -EINVAL; + break; + } + + return 0; + } + case VIDIOCGPICT: + { + struct video_picture vp = pb->picture; + + DEBUG("PlanB: IOCTL VIDIOCGPICT\n"); + + switch(pb->win.color_fmt) { + case PLANB_GRAY: + vp.palette = VIDEO_PALETTE_GREY; + case PLANB_COLOUR15: + vp.palette = VIDEO_PALETTE_RGB555; + break; + case PLANB_COLOUR32: + vp.palette = VIDEO_PALETTE_RGB32; + break; + default: + vp.palette = 0; + break; + } + + if(copy_to_user(arg,&vp,sizeof(vp))) + return -EFAULT; + return 0; + } + case VIDIOCSPICT: + { + struct video_picture vp; + + DEBUG("PlanB: IOCTL VIDIOCSPICT\n"); + + if(copy_from_user(&vp,arg,sizeof(vp))) + return -EFAULT; + pb->picture = vp; + /* Should we do sanity checks here? */ + saa_set (SAA7196_BRIG, (unsigned char) + ((pb->picture.brightness) >> 8), pb); + saa_set (SAA7196_HUEC, (unsigned char) + ((pb->picture.hue) >> 8) ^ 0x80, pb); + saa_set (SAA7196_CSAT, (unsigned char) + ((pb->picture.colour) >> 9), pb); + saa_set (SAA7196_CONT, (unsigned char) + ((pb->picture.contrast) >> 9), pb); + + return 0; + } + case VIDIOCSWIN: + { + struct video_window vw; + struct video_clip clip; + int i; + + DEBUG("PlanB: IOCTL VIDIOCSWIN\n"); + + if(copy_from_user(&vw,arg,sizeof(vw))) + return -EFAULT; + + planb_lock(pb); + /* Stop overlay if running */ + suspend_overlay(pb); + pb->win.interlace = (vw.height > pb->maxlines/2)? 
1: 0; + if (pb->win.x != vw.x || + pb->win.y != vw.y || + pb->win.width != vw.width || + pb->win.height != vw.height || + !pb->cmd_buff_inited) { + pb->win.x = vw.x; + pb->win.y = vw.y; + pb->win.width = vw.width; + pb->win.height = vw.height; + fill_cmd_buff(pb); + } + /* Reset clip mask */ + memset ((void *) pb->mask, 0xff, (pb->maxlines + * ((PLANB_MAXPIXELS + 7) & ~7)) / 8); + /* Add any clip rects */ + for (i = 0; i < vw.clipcount; i++) { + if (copy_from_user(&clip, vw.clips + i, + sizeof(struct video_clip))) + return -EFAULT; + add_clip(pb, &clip); + } + /* restart overlay if it was running */ + resume_overlay(pb); + planb_unlock(pb); + return 0; + } + case VIDIOCGWIN: + { + struct video_window vw; + + DEBUG("PlanB: IOCTL VIDIOCGWIN\n"); + + vw.x=pb->win.x; + vw.y=pb->win.y; + vw.width=pb->win.width; + vw.height=pb->win.height; + vw.chromakey=0; + vw.flags=0; + if(pb->win.interlace) + vw.flags|=VIDEO_WINDOW_INTERLACE; + if(copy_to_user(arg,&vw,sizeof(vw))) + return -EFAULT; + return 0; + } + case VIDIOCSYNC: { + int i; + + IDEBUG("PlanB: IOCTL VIDIOCSYNC\n"); + + if(copy_from_user((void *)&i,arg,sizeof(int))) + return -EFAULT; + + IDEBUG("PlanB: sync to frame %d\n", i); + + if(i > (MAX_GBUFFERS - 1) || i < 0) + return -EINVAL; +chk_grab: + switch (pb->frame_stat[i]) { + case GBUFFER_UNUSED: + return -EINVAL; + case GBUFFER_GRABBING: + IDEBUG("PlanB: waiting for grab" + " done (%d)\n", i); + interruptible_sleep_on(&pb->capq); + if(signal_pending(current)) + return -EINTR; + goto chk_grab; + case GBUFFER_DONE: + pb->frame_stat[i] = GBUFFER_UNUSED; + break; + } + return 0; + } + + case VIDIOCMCAPTURE: + { + struct video_mmap vm; + volatile unsigned int status; + + IDEBUG("PlanB: IOCTL VIDIOCMCAPTURE\n"); + + if(copy_from_user((void *) &vm,(void *)arg,sizeof(vm))) + return -EFAULT; + status = pb->frame_stat[vm.frame]; + if (status != GBUFFER_UNUSED) + return -EBUSY; + + return vgrab(pb, &vm); + } + + case VIDIOCGMBUF: + { + int i; + struct video_mbuf vm; + + DEBUG("PlanB: IOCTL VIDIOCGMBUF\n"); + + memset(&vm, 0 , sizeof(vm)); + vm.size = PLANB_MAX_FBUF * MAX_GBUFFERS; + vm.frames = MAX_GBUFFERS; + for(i = 0; i<MAX_GBUFFERS; i++) + vm.offsets[i] = PLANB_MAX_FBUF * i; + if(copy_to_user((void *)arg, (void *)&vm, sizeof(vm))) + return -EFAULT; + return 0; + } + + case PLANBIOCGSAAREGS: + { + struct planb_saa_regs preg; + + DEBUG("PlanB: IOCTL PLANBIOCGSAAREGS\n"); + + if(copy_from_user(&preg, arg, sizeof(preg))) + return -EFAULT; + if(preg.addr >= SAA7196_NUMREGS) + return -EINVAL; + preg.val = saa_regs[pb->win.norm][preg.addr]; + if(copy_to_user((void *)arg, (void *)&preg, + sizeof(preg))) + return -EFAULT; + return 0; + } + + case PLANBIOCSSAAREGS: + { + struct planb_saa_regs preg; + + DEBUG("PlanB: IOCTL PLANBIOCSSAAREGS\n"); + + if(copy_from_user(&preg, arg, sizeof(preg))) + return -EFAULT; + if(preg.addr >= SAA7196_NUMREGS) + return -EINVAL; + saa_set (preg.addr, preg.val, pb); + return 0; + } + + case PLANBIOCGSTAT: + { + struct planb_stat_regs pstat; + + DEBUG("PlanB: IOCTL PLANBIOCGSTAT\n"); + + pstat.ch1_stat = in_le32(&pb->planb_base->ch1.status); + pstat.ch2_stat = in_le32(&pb->planb_base->ch2.status); + pstat.saa_stat0 = saa_status(0, pb); + pstat.saa_stat1 = saa_status(1, pb); + + if(copy_to_user((void *)arg, (void *)&pstat, + sizeof(pstat))) + return -EFAULT; + return 0; + } + + case PLANBIOCSMODE: { + int v; + + DEBUG("PlanB: IOCTL PLANBIOCSMODE\n"); + + if(copy_from_user(&v, arg, sizeof(v))) + return -EFAULT; + + switch(v) + { + case PLANB_TV_MODE: + saa_set (SAA7196_STDC, + (saa_regs[pb->win.norm][SAA7196_STDC] & + 0x7f), pb); + break; + case PLANB_VTR_MODE: + saa_set (SAA7196_STDC, + (saa_regs[pb->win.norm][SAA7196_STDC] | + 0x80), pb); + break; + default: + return -EINVAL; + break; + } + pb->win.mode = v; + return 0; + } + case PLANBIOCGMODE: { + int v=pb->win.mode; + + DEBUG("PlanB: IOCTL 
PLANBIOCGMODE\n"); + + if(copy_to_user(arg,&v,sizeof(v))) + return -EFAULT; + return 0; + } +#ifdef PLANB_GSCANLINE + case PLANBG_GRAB_BPL: { + int v=pb->gbytes_per_line; + + DEBUG("PlanB: IOCTL PLANBG_GRAB_BPL\n"); + + if(copy_to_user(arg,&v,sizeof(v))) + return -EFAULT; + return 0; + } +#endif /* PLANB_GSCANLINE */ + case PLANB_INTR_DEBUG: { + int i; + + DEBUG("PlanB: IOCTL PLANB_INTR_DEBUG\n"); + + if(copy_from_user(&i, arg, sizeof(i))) + return -EFAULT; + + /* avoid hang ups all together */ + for (i = 0; i < MAX_GBUFFERS; i++) { + if(pb->frame_stat[i] == GBUFFER_GRABBING) { + pb->frame_stat[i] = GBUFFER_DONE; + } + } + if(pb->grabbing) + pb->grabbing--; + wake_up_interruptible(&pb->capq); + return 0; + } + case PLANB_INV_REGS: { + int i; + struct planb_any_regs any; + + DEBUG("PlanB: IOCTL PLANB_INV_REGS\n"); + + if(copy_from_user(&any, arg, sizeof(any))) + return -EFAULT; + if(any.offset < 0 || any.offset + any.bytes > 0x400) + return -EINVAL; + if(any.bytes > 128) + return -EINVAL; + for (i = 0; i < any.bytes; i++) { + any.data[i] = + in_8((unsigned char *)pb->planb_base + + any.offset + i); + } + if(copy_to_user(arg,&any,sizeof(any))) + return -EFAULT; + return 0; + } + default: + { + DEBUG("PlanB: Unimplemented IOCTL\n"); + return -ENOIOCTLCMD; + } + /* Some IOCTLs are currently unsupported on PlanB */ + case VIDIOCGTUNER: { + DEBUG("PlanB: IOCTL VIDIOCGTUNER\n"); + goto unimplemented; } + case VIDIOCSTUNER: { + DEBUG("PlanB: IOCTL VIDIOCSTUNER\n"); + goto unimplemented; } + case VIDIOCSFREQ: { + DEBUG("PlanB: IOCTL VIDIOCSFREQ\n"); + goto unimplemented; } + case VIDIOCGFREQ: { + DEBUG("PlanB: IOCTL VIDIOCGFREQ\n"); + goto unimplemented; } + case VIDIOCKEY: { + DEBUG("PlanB: IOCTL VIDIOCKEY\n"); + goto unimplemented; } + case VIDIOCSAUDIO: { + DEBUG("PlanB: IOCTL VIDIOCSAUDIO\n"); + goto unimplemented; } + case VIDIOCGAUDIO: { + DEBUG("PlanB: IOCTL VIDIOCGAUDIO\n"); + goto unimplemented; } +unimplemented: + DEBUG(" Unimplemented\n"); + return -ENOIOCTLCMD; + } + return 0; +} + +static int planb_mmap(struct video_device *dev, const char *adr, unsigned long size) +{ + int i; + struct planb *pb = (struct planb *)dev; + unsigned long start = (unsigned long)adr; + unsigned long map_size; + + if (size > MAX_GBUFFERS * PLANB_MAX_FBUF) + return -EINVAL; + if (!pb->rawbuf) { + int err; + if((err=grabbuf_alloc(pb))) + return err; + } + for (i = 0; i < pb->rawbuf_size; i++) { + if (remap_page_range(start, virt_to_phys((void *)pb->rawbuf[i]), + PAGE_SIZE, PAGE_SHARED)) + return -EAGAIN; + start += PAGE_SIZE; + if (size <= PAGE_SIZE) + break; + size -= PAGE_SIZE; + } + return 0; +} + +static struct video_device planb_template= +{ + owner: THIS_MODULE, + name: PLANB_DEVICE_NAME, + type: VID_TYPE_OVERLAY, + hardware: VID_HARDWARE_PLANB, + open: planb_open, + close: planb_close, + read: planb_read, + write: planb_write, + ioctl: planb_ioctl, + mmap: planb_mmap, /* mmap? 
*/ +}; + +static int init_planb(struct planb *pb) +{ + unsigned char saa_rev; + int i, result; + unsigned long flags; + + memset ((void *) &pb->win, 0, sizeof (struct planb_window)); + /* Simple sanity check */ + if(def_norm >= NUM_SUPPORTED_NORM || def_norm < 0) { + printk(KERN_ERR "PlanB: Option(s) invalid\n"); + return -2; + } + pb->win.norm = def_norm; + pb->win.mode = PLANB_TV_MODE; /* TV mode */ + pb->win.interlace=1; + pb->win.x=0; + pb->win.y=0; + pb->win.width=768; /* 640 */ + pb->win.height=576; /* 480 */ + pb->maxlines=576; +#if 0 + btv->win.cropwidth=768; /* 640 */ + btv->win.cropheight=576; /* 480 */ + btv->win.cropx=0; + btv->win.cropy=0; +#endif + pb->win.pad=0; + pb->win.bpp=4; + pb->win.depth=32; + pb->win.color_fmt=PLANB_COLOUR32; + pb->win.bpl=1024*pb->win.bpp; + pb->win.swidth=1024; + pb->win.sheight=768; +#ifdef PLANB_GSCANLINE + if((pb->gbytes_per_line = PLANB_MAXPIXELS * 4) > PAGE_SIZE + || (pb->gbytes_per_line <= 0)) + return -3; + else { + /* page align pb->gbytes_per_line for DMA purpose */ + for(i = PAGE_SIZE; pb->gbytes_per_line < (i>>1);) + i>>=1; + pb->gbytes_per_line = i; + } +#endif + pb->tab_size = PLANB_MAXLINES + 40; + pb->suspend = 0; + pb->lock = 0; + init_MUTEX(&pb->lock); + pb->ch1_cmd = 0; + pb->ch2_cmd = 0; + pb->mask = 0; + pb->priv_space = 0; + pb->offset = 0; + pb->user = 0; + pb->overlay = 0; + init_waitqueue_head(&pb->suspendq); + pb->cmd_buff_inited = 0; + pb->frame_buffer_phys = 0; + + /* Reset DMA controllers */ + planb_dbdma_stop(&pb->planb_base->ch2); + planb_dbdma_stop(&pb->planb_base->ch1); + + saa_rev = (saa_status(0, pb) & 0xf0) >> 4; + printk(KERN_INFO "PlanB: SAA7196 video processor rev. %d\n", saa_rev); + /* Initialize the SAA registers in memory and on chip */ + saa_init_regs (pb); + + /* clear interrupt mask */ + pb->intr_mask = PLANB_CLR_IRQ; + + save_flags(flags); cli(); + result = request_irq(pb->irq, planb_irq, 0, "PlanB", (void *)pb); + if (result < 0) { + if (result==-EINVAL) + printk(KERN_ERR "PlanB: Bad irq number (%d) " + "or handler\n", (int)pb->irq); + else if (result==-EBUSY) + printk(KERN_ERR "PlanB: I don't know why, " + "but IRQ %d is busy\n", (int)pb->irq); + restore_flags(flags); + return result; + } + disable_irq(pb->irq); + restore_flags(flags); + + /* Now add the template and register the device unit. 
*/ + memcpy(&pb->video_dev,&planb_template,sizeof(planb_template)); + + pb->picture.brightness=0x90<<8; + pb->picture.contrast = 0x70 << 8; + pb->picture.colour = 0x70<<8; + pb->picture.hue = 0x8000; + pb->picture.whiteness = 0; + pb->picture.depth = pb->win.depth; + + pb->frame_stat=NULL; + init_waitqueue_head(&pb->capq); + for(i=0; i<MAX_GBUFFERS; i++) { + pb->gbuf_idx[i] = PLANB_MAX_FBUF * i / PAGE_SIZE; + pb->gwidth[i]=0; + pb->gheight[i]=0; + pb->gfmt[i]=0; + pb->cap_cmd[i]=NULL; +#ifndef PLANB_GSCANLINE + pb->l_fr_addr_idx[i] = MAX_GBUFFERS * (PLANB_MAX_FBUF + / PAGE_SIZE + 1) + MAX_LNUM * i; + pb->lsize[i] = 0; + pb->lnum[i] = 0; +#endif + } + pb->rawbuf=NULL; + pb->grabbing=0; + + /* enable interrupts */ + out_le32(&pb->planb_base->intr_stat, PLANB_CLR_IRQ); + pb->intr_mask = PLANB_FRM_IRQ; + enable_irq(pb->irq); + + if(video_register_device(&pb->video_dev, VFL_TYPE_GRABBER, video_nr)<0) + return -1; + + return 0; +} + +/* + * Scan for a PlanB controller, request the irq and map the io memory + */ + +static int find_planb(void) +{ + struct planb *pb; + struct device_node *planb_devices; + unsigned char dev_fn, confreg, bus; + unsigned int old_base, new_base; + unsigned int irq; + struct pci_dev *pdev; + + if (_machine != _MACH_Pmac) + return 0; + + planb_devices = find_devices("planb"); + if (planb_devices == 0) { + planb_num=0; + printk(KERN_WARNING "PlanB: no device found!\n"); + return planb_num; + } + + if (planb_devices->next != NULL) + printk(KERN_ERR "Warning: only using first PlanB device!\n"); + pb = &planbs[0]; + planb_num = 1; + + if (planb_devices->n_addrs != 1) { + printk (KERN_WARNING "PlanB: expecting 1 address for planb " + "(got %d)", planb_devices->n_addrs); + return 0; + } + + if (planb_devices->n_intrs == 0) { + printk(KERN_WARNING "PlanB: no intrs for device %s\n", + planb_devices->full_name); + return 0; + } else { + irq = planb_devices->intrs[0].line; + } + + /* Initialize PlanB's PCI registers */ + + /* There is a bug with the way OF assigns addresses + to the devices behind the chaos bridge. + control needs only 0x1000 of space, but decodes only + the upper 16 bits. It therefore occupies a full 64K. + OF assigns the planb controller memory within this space; + so we need to change that here in order to access planb. */ + + /* We remap to 0xf1000000 in hope that nobody uses it ! */ + + bus = (planb_devices->addrs[0].space >> 16) & 0xff; + dev_fn = (planb_devices->addrs[0].space >> 8) & 0xff; + confreg = planb_devices->addrs[0].space & 0xff; + old_base = planb_devices->addrs[0].address; + new_base = 0xf1000000; + + DEBUG("PlanB: Found on bus %d, dev %d, func %d, " + "membase 0x%x (base reg. 0x%x)\n", + bus, PCI_SLOT(dev_fn), PCI_FUNC(dev_fn), old_base, confreg);
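The rebase performed below exists because, as the comment above explains, the neighbouring control node decodes only the upper 16 address bits and so answers for the entire 64K window around its base, including the range Open Firmware handed to planb. A hedged sketch of that decode rule; the control and OF-assigned planb bases here are assumptions for the sake of a standalone example, only the 0xf1000000 target comes from the code:

#include <stdio.h>

#define CONTROL_BASE   0xf3000000u               /* assumed */
#define PLANB_OF_BASE  (CONTROL_BASE + 0x8000u)  /* assumed: inside the window */
#define PLANB_NEW_BASE 0xf1000000u               /* where find_planb() moves it */

/* "decodes only the upper 16 bits": any address in the 64K window hits */
static int control_claims(unsigned int addr)
{
	return (addr & 0xffff0000u) == (CONTROL_BASE & 0xffff0000u);
}

int main(void)
{
	printf("old planb base claimed by control: %d\n",
	       control_claims(PLANB_OF_BASE));   /* 1: reads would hit control */
	printf("new planb base claimed by control: %d\n",
	       control_claims(PLANB_NEW_BASE));  /* 0: planb is reachable */
	return 0;
}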
+ + pdev = pci_find_slot (bus, dev_fn); + if (!pdev) { + printk(KERN_ERR "cannot find slot\n"); + /* XXX handle error */ + } + + /* Enable response in memory space, bus mastering, + use memory write and invalidate */ + pci_write_config_word (pdev, PCI_COMMAND, + PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER | + PCI_COMMAND_INVALIDATE); + /* Set PCI Cache line size & latency timer */ + pci_write_config_byte (pdev, PCI_CACHE_LINE_SIZE, 0x8); + pci_write_config_byte (pdev, PCI_LATENCY_TIMER, 0x40); + + /* Set the new base address */ + pci_write_config_dword (pdev, confreg, new_base); + + planb_regs = (volatile struct planb_registers *) + ioremap (new_base, 0x400); + pb->planb_base = planb_regs; + pb->planb_base_phys = (struct planb_registers *)new_base; + pb->irq = irq; + + return planb_num; +} + +static void release_planb(void) +{ + int i; + struct planb *pb; + + for (i=0;i<planb_num; i++) + { + pb=&planbs[i]; + + /* stop and flush DMAs unconditionally */ + planb_dbdma_stop(&pb->planb_base->ch2); + planb_dbdma_stop(&pb->planb_base->ch1); + + /* clear and free interrupts */ + pb->intr_mask = PLANB_CLR_IRQ; + out_le32 (&pb->planb_base->intr_stat, PLANB_CLR_IRQ); + free_irq(pb->irq, pb); + + /* make sure all allocated memory are freed */ + planb_prepare_close(pb); + + printk(KERN_INFO "PlanB: unregistering with v4l\n"); + video_unregister_device(&pb->video_dev); + + /* note that iounmap() does nothing on the PPC right now */ + iounmap ((void *)pb->planb_base); + } +} + +#ifdef MODULE + +int init_module(void) +{ +#else +int __init init_planbs(struct video_init *unused) +{ +#endif + int i; + + if (find_planb()<=0) + return -EIO; + + for (i=0; ifbuffer; - while (size>0) { - unsigned long page = virt_to_phys((void*)pos); - if (remap_page_range(start, page, PAGE_SIZE, PAGE_SHARED)) - return -EAGAIN; - start += PAGE_SIZE; - pos += PAGE_SIZE; - size -= PAGE_SIZE; - } + pos = virt_to_phys(ztv->fbuffer); + if (remap_page_range(start, pos, size, PAGE_SHARED)) + return -EAGAIN; return 0; } diff -urpN linux-2.4.9-linus/drivers/media/video/zr36120_mem.c linux-2.4.9-larpage/drivers/media/video/zr36120_mem.c --- linux-2.4.9-linus/drivers/media/video/zr36120_mem.c 2000-08-07 21:01:36.000000000 -0700 +++ linux-2.4.9-larpage/drivers/media/video/zr36120_mem.c 2002-11-20 02:02:49.000000000 -0800 @@ -24,9 +24,6 @@ #include #include #include -#ifdef CONFIG_BIGPHYS_AREA -#include -#endif #include "zr36120.h" #include "zr36120_mem.h" @@ -38,19 +35,11 @@ void* bmalloc(unsigned long size) { void* mem; -#ifdef CONFIG_BIGPHYS_AREA - mem = bigphysarea_alloc_pages(size/PAGE_SIZE, 1, GFP_KERNEL); -#else - /* - * The following function got a lot of memory at boottime, - * so we know its always there... 
- */ mem = (void*)__get_free_pages(GFP_USER|GFP_DMA,get_order(size)); -#endif if (mem) { unsigned long adr = (unsigned long)mem; - while (size > 0) { - mem_map_reserve(virt_to_page(phys_to_virt(adr))); + while ((long)size > 0) { + mem_map_reserve(virt_to_page(adr)); adr += PAGE_SIZE; size -= PAGE_SIZE; } @@ -63,15 +52,10 @@ void bfree(void* mem, unsigned long size if (mem) { unsigned long adr = (unsigned long)mem; unsigned long siz = size; - while (siz > 0) { - mem_map_unreserve(virt_to_page(phys_to_virt(adr))); + while ((long)siz > 0) { + mem_map_unreserve(virt_to_page(adr)); adr += PAGE_SIZE; siz -= PAGE_SIZE; - } -#ifdef CONFIG_BIGPHYS_AREA - bigphysarea_free_pages(mem); -#else - free_pages((unsigned long)mem,get_order(size)); -#endif } + free_pages((unsigned long)mem,get_order(size)); } diff -urpN linux-2.4.9-linus/drivers/media/video/zr36120_mem.c.orig linux-2.4.9-larpage/drivers/media/video/zr36120_mem.c.orig --- linux-2.4.9-linus/drivers/media/video/zr36120_mem.c.orig 1969-12-31 16:00:00.000000000 -0800 +++ linux-2.4.9-larpage/drivers/media/video/zr36120_mem.c.orig 2002-11-20 02:02:49.000000000 -0800 @@ -0,0 +1,66 @@ +/* + zr36120_mem.c - Zoran 36120/36125 based framegrabbers + + Copyright (C) 1998-1999 Pauline Middelink + + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
+*/ + +#include +#include +#include +#include +#include +#include + +#include "zr36120.h" +#include "zr36120_mem.h" + +/*******************************/ +/* Memory management functions */ +/*******************************/ + +void* bmalloc(unsigned long size) +{ + void* mem; + mem = (void*)__get_free_pages(GFP_USER|GFP_DMA,get_order(size)); + if (mem) { + unsigned long adr = (unsigned long)mem; + while ((long)size > 0) { + mem_map_reserve(virt_to_page(adr)); + adr += PAGE_SIZE; + size -= PAGE_SIZE; + } + } + return mem; +} + +void bfree(void* mem, unsigned long size) +{ + if (mem) { + unsigned long adr = (unsigned long)mem; + unsigned long siz = size; + while (siz > 0) { + mem_map_unreserve(virt_to_page(phys_to_virt(adr))); + adr += PAGE_SIZE; + siz -= PAGE_SIZE; + } +#ifdef CONFIG_BIGPHYS_AREA + bigphysarea_free_pages(mem); +#else + free_pages((unsigned long)mem,get_order(size)); +#endif + } +} diff -urpN linux-2.4.9-linus/drivers/mtd/bootldr.c linux-2.4.9-larpage/drivers/mtd/bootldr.c --- linux-2.4.9-linus/drivers/mtd/bootldr.c 2001-06-12 10:30:27.000000000 -0700 +++ linux-2.4.9-larpage/drivers/mtd/bootldr.c 2002-11-20 02:02:49.000000000 -0800 @@ -89,11 +89,11 @@ int parse_bootldr_partitions(struct mtd_ printk(__FUNCTION__ ": partition_table_offset=%#lx\n", partition_table_offset); /* Read the partition table */ - partition_table = (struct BootldrFlashPartitionTable *)kmalloc(PAGE_SIZE, GFP_KERNEL); + partition_table = (struct BootldrFlashPartitionTable *)kmalloc(MMUPAGE_SIZE, GFP_KERNEL); if (!partition_table) return -ENOMEM; ret = master->read(master, partition_table_offset, - PAGE_SIZE, &retlen, (void *)partition_table); + MMUPAGE_SIZE, &retlen, (void *)partition_table); if (ret) goto out; diff -urpN linux-2.4.9-linus/drivers/mtd/redboot.c linux-2.4.9-larpage/drivers/mtd/redboot.c --- linux-2.4.9-linus/drivers/mtd/redboot.c 2001-06-12 10:30:27.000000000 -0700 +++ linux-2.4.9-larpage/drivers/mtd/redboot.c 2002-11-20 02:02:49.000000000 -0800 @@ -45,14 +45,14 @@ int parse_redboot_partitions(struct mtd_ char *names; int namelen = 0; - buf = kmalloc(PAGE_SIZE, GFP_KERNEL); + buf = kmalloc(MMUPAGE_SIZE, GFP_KERNEL); if (!buf) return -ENOMEM; /* Read the start of the last erase block */ ret = master->read(master, master->size - master->erasesize, - PAGE_SIZE, &retlen, (void *)buf); + MMUPAGE_SIZE, &retlen, (void *)buf); if (ret) goto out; @@ -67,7 +67,7 @@ int parse_redboot_partitions(struct mtd_ goto out; } - for (i = 0; i < PAGE_SIZE / sizeof(struct fis_image_desc); i++) { + for (i = 0; i < MMUPAGE_SIZE / sizeof(struct fis_image_desc); i++) { struct fis_list *new_fl, **prev; if (buf[i].name[0] == 0xff) diff -urpN linux-2.4.9-linus/drivers/net/lasi_82596.c linux-2.4.9-larpage/drivers/net/lasi_82596.c --- linux-2.4.9-linus/drivers/net/lasi_82596.c 2001-08-12 10:51:42.000000000 -0700 +++ linux-2.4.9-larpage/drivers/net/lasi_82596.c 2002-11-20 02:02:50.000000000 -0800 @@ -975,12 +975,12 @@ static int i596_test(struct net_device * data = virt_to_dma(lp,tint); tint[1] = -1; - CHECK_WBACK(tint,PAGE_SIZE); + CHECK_WBACK(tint,MMUPAGE_SIZE); MPU_PORT(dev, 1, data); for(data = 1000000; data; data--) { - CHECK_INV(tint,PAGE_SIZE); + CHECK_INV(tint,MMUPAGE_SIZE); if(tint[1] != -1) break; diff -urpN linux-2.4.9-linus/drivers/net/sun3lance.c linux-2.4.9-larpage/drivers/net/sun3lance.c --- linux-2.4.9-linus/drivers/net/sun3lance.c 2001-07-04 11:50:39.000000000 -0700 +++ linux-2.4.9-larpage/drivers/net/sun3lance.c 2002-11-20 02:02:50.000000000 -0800 @@ -276,7 +276,7 @@ static int __init lance_probe( struct 
ne if(!(iopte & SUN3_PAGE_TYPE_IO)) /* this an io page? */ continue; - if(((iopte & SUN3_PAGE_PGNUM_MASK) << PAGE_SHIFT) == + if(((iopte & SUN3_PAGE_PGNUM_MASK) << SUN3_PTE_SIZE_BITS) == LANCE_OBIO) { found = 1; break; diff -urpN linux-2.4.9-linus/drivers/pcmcia/hd64465_ss.c linux-2.4.9-larpage/drivers/pcmcia/hd64465_ss.c --- linux-2.4.9-linus/drivers/pcmcia/hd64465_ss.c 2001-07-10 20:16:30.000000000 -0700 +++ linux-2.4.9-larpage/drivers/pcmcia/hd64465_ss.c 2002-11-20 02:02:50.000000000 -0800 @@ -661,8 +661,8 @@ static int hs_set_io_map(unsigned int so paddrbase = virt_to_phys((void*)(sp->mem_base + 2 * HD64465_PCC_WINDOW)); vaddrbase = (unsigned long)sp->io_vma->addr; - pstart = io->start & PAGE_MASK; - psize = ((io->stop + PAGE_SIZE) & PAGE_MASK) - pstart; + pstart = io->start & MMUPAGE_MASK; + psize = ((io->stop + MMUPAGE_SIZE) & MMUPAGE_MASK) - pstart; /* * Change PTEs in only that portion of the mapping requested diff -urpN linux-2.4.9-linus/drivers/pnp/isapnp_proc.c linux-2.4.9-larpage/drivers/pnp/isapnp_proc.c --- linux-2.4.9-linus/drivers/pnp/isapnp_proc.c 2001-01-17 13:29:14.000000000 -0800 +++ linux-2.4.9-larpage/drivers/pnp/isapnp_proc.c 2002-11-20 02:02:50.000000000 -0800 @@ -164,7 +164,7 @@ static int isapnp_info_entry_open(struct isapnp_alloc(sizeof(isapnp_info_buffer_t)); if (!buffer) return -ENOMEM; - buffer->len = 4 * PAGE_SIZE; + buffer->len = 16 * 1024; buffer->buffer = vmalloc(buffer->len); if (!buffer->buffer) { kfree(buffer); diff -urpN linux-2.4.9-linus/drivers/sbus/char/flash.c linux-2.4.9-larpage/drivers/sbus/char/flash.c --- linux-2.4.9-linus/drivers/sbus/char/flash.c 2001-03-06 22:44:16.000000000 -0800 +++ linux-2.4.9-larpage/drivers/sbus/char/flash.c 2002-11-20 02:02:50.000000000 -0800 @@ -63,12 +63,12 @@ flash_mmap(struct file *file, struct vm_ } spin_unlock(&flash_lock); - if ((vma->vm_pgoff << PAGE_SHIFT) > size) + if ((vma->vm_pgoff << MMUPAGE_SHIFT) > size) return -ENXIO; - addr += (vma->vm_pgoff << PAGE_SHIFT); + addr += (vma->vm_pgoff << MMUPAGE_SHIFT); - if (vma->vm_end - (vma->vm_start + (vma->vm_pgoff << PAGE_SHIFT)) > size) - size = vma->vm_end - (vma->vm_start + (vma->vm_pgoff << PAGE_SHIFT)); + if (vma->vm_end - (vma->vm_start + (vma->vm_pgoff << MMUPAGE_SHIFT)) > size) + size = vma->vm_end - (vma->vm_start + (vma->vm_pgoff << MMUPAGE_SHIFT)); pgprot_val(vma->vm_page_prot) &= ~(_PAGE_CACHE); pgprot_val(vma->vm_page_prot) |= _PAGE_E; diff -urpN linux-2.4.9-linus/drivers/sbus/char/zs.c linux-2.4.9-larpage/drivers/sbus/char/zs.c --- linux-2.4.9-linus/drivers/sbus/char/zs.c 2001-06-29 19:38:26.000000000 -0700 +++ linux-2.4.9-larpage/drivers/sbus/char/zs.c 2002-11-20 02:02:51.000000000 -0800 @@ -2017,7 +2017,7 @@ static struct sun_zslayout * __init get_ if (central_bus == NULL) { mapped_addr = sbus_ioremap(&sdev->resource[0], 0, - PAGE_SIZE, "Zilog Registers"); + MMUPAGE_SIZE, "Zilog Registers"); } else { struct linux_prom_registers zsregs[1]; int err; @@ -2078,7 +2078,7 @@ static struct sun_zslayout * __init get_ /* Translate PROM's mapping we captured at boot * time into physical address. 
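	 * (The masking below keeps only the offset of the PROM
	 * virtual address within its hardware MMU page; that offset
	 * is then added onto the physical base.)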
*/ - base += ((unsigned long)vaddr[0] & ~PAGE_MASK); + base += ((unsigned long)vaddr[0] & ~MMUPAGE_MASK); return (struct sun_zslayout *) base; } } diff -urpN linux-2.4.9-linus/drivers/scsi/53c7,8xx.c linux-2.4.9-larpage/drivers/scsi/53c7,8xx.c --- linux-2.4.9-linus/drivers/scsi/53c7,8xx.c 2001-04-27 14:04:32.000000000 -0700 +++ linux-2.4.9-larpage/drivers/scsi/53c7,8xx.c 2002-11-20 02:02:51.000000000 -0800 @@ -5392,7 +5392,7 @@ print_insn (struct Scsi_Host *host, cons * to use vverify()? */ - if (virt_to_phys((void *)insn) < PAGE_SIZE || + if (virt_to_phys((void *)insn) < MMUPAGE_SIZE || virt_to_phys((void *)(insn + 8)) > virt_to_phys(high_memory) || ((((dcmd = (insn[0] >> 24) & 0xff) & DCMD_TYPE_MMI) == DCMD_TYPE_MMI) && virt_to_phys((void *)(insn + 12)) > virt_to_phys(high_memory))) { @@ -6385,7 +6385,8 @@ dump_events (struct Scsi_Host *host, int static int check_address (unsigned long addr, int size) { - return (virt_to_phys((void *)addr) < PAGE_SIZE || virt_to_phys((void *)(addr + size)) > virt_to_phys(high_memory) ? -1 : 0); + return (virt_to_phys((void *)addr) < MMUPAGE_SIZE || + virt_to_phys((void *)(addr + size)) > virt_to_phys(high_memory))? -1: 0; } #ifdef MODULE diff -urpN linux-2.4.9-linus/drivers/scsi/53c7xx.c linux-2.4.9-larpage/drivers/scsi/53c7xx.c --- linux-2.4.9-linus/drivers/scsi/53c7xx.c 2001-04-13 20:26:07.000000000 -0700 +++ linux-2.4.9-larpage/drivers/scsi/53c7xx.c 2002-11-20 02:02:51.000000000 -0800 @@ -5086,7 +5086,7 @@ print_insn (struct Scsi_Host *host, cons * to use vverify()? */ - if (virt_to_phys((void *)insn) < PAGE_SIZE || + if (virt_to_phys((void *)insn) < MMUPAGE_SIZE || virt_to_phys((void *)(insn + 8)) > virt_to_phys(high_memory) || ((((dcmd = (insn[0] >> 24) & 0xff) & DCMD_TYPE_MMI) == DCMD_TYPE_MMI) && virt_to_phys((void *)(insn + 12)) > virt_to_phys(high_memory))) { @@ -6071,7 +6071,8 @@ dump_events (struct Scsi_Host *host, int static int check_address (unsigned long addr, int size) { - return (virt_to_phys((void *)addr) < PAGE_SIZE || virt_to_phys((void *)(addr + size)) > virt_to_phys(high_memory) ? -1 : 0); + return (virt_to_phys((void *)addr) < MMUPAGE_SIZE || + virt_to_phys((void *)(addr + size)) > virt_to_phys(high_memory))? -1: 0; } #ifdef MODULE diff -urpN linux-2.4.9-linus/drivers/scsi/esp.c linux-2.4.9-larpage/drivers/scsi/esp.c --- linux-2.4.9-linus/drivers/scsi/esp.c 2001-02-18 19:49:55.000000000 -0800 +++ linux-2.4.9-larpage/drivers/scsi/esp.c 2002-11-20 02:02:52.000000000 -0800 @@ -1803,7 +1803,7 @@ after_nego_msg_built: sbus_writel(tmp, esp->dregs + DMA_CSR); if (esp->dma->revision == dvmaesc1) { if (i) /* Workaround ESC gate array SBUS rerun bug. 
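			   (For clarity: the dummy byte count written
			   below is now one MMU page; before the large
			   kernel page changes it was one kernel page,
			   PAGE_SIZE.)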
*/ - sbus_writel(PAGE_SIZE, esp->dregs + DMA_COUNT); + sbus_writel(MMUPAGE_SIZE, esp->dregs + DMA_COUNT); } sbus_writel(esp->esp_command_dvma, esp->dregs + DMA_ADDR); @@ -2262,8 +2262,8 @@ static void dma_setup(struct esp *esp, _ __u32 src = addr; __u32 dest = src + count; - if (dest & (PAGE_SIZE - 1)) - count = PAGE_ALIGN(count); + if (dest & (MMUPAGE_SIZE - 1)) + count = MMUPAGE_ALIGN(count); sbus_writel(count, esp->dregs + DMA_COUNT); } sbus_writel(addr, esp->dregs + DMA_ADDR); diff -urpN linux-2.4.9-linus/drivers/scsi/ips.c linux-2.4.9-larpage/drivers/scsi/ips.c --- linux-2.4.9-linus/drivers/scsi/ips.c 2001-08-16 09:49:49.000000000 -0700 +++ linux-2.4.9-larpage/drivers/scsi/ips.c 2002-11-20 02:02:52.000000000 -0800 @@ -2303,7 +2303,7 @@ ips_make_passthru(ips_ha_t *ha, Scsi_Cmn ha->save_ioctl_order = ha->ioctl_order; ha->save_ioctl_datasize = ha->ioctl_datasize; ha->ioctl_data = ips_FlashData; - ha->ioctl_order = 7; + ha->ioctl_order = get_order(IPS_IMAGE_SIZE); ha->ioctl_datasize = IPS_IMAGE_SIZE; } diff -urpN linux-2.4.9-linus/drivers/scsi/megaraid.c linux-2.4.9-larpage/drivers/scsi/megaraid.c --- linux-2.4.9-linus/drivers/scsi/megaraid.c 2001-08-12 10:51:41.000000000 -0700 +++ linux-2.4.9-larpage/drivers/scsi/megaraid.c 2002-11-20 02:02:52.000000000 -0800 @@ -1753,7 +1753,7 @@ static mega_scb *mega_ioctl (mega_host_c switch (data[0]) { case FW_FIRE_WRITE: case FW_FIRE_FLASH: - if ((ulong) user_area & (PAGE_SIZE - 1)) { + if ((ulong) user_area & (MMUPAGE_SIZE - 1)) { printk ("megaraid:user address not aligned on 4K boundary.Error.\n"); SCpnt->result = (DID_ERROR << 16); @@ -1783,7 +1783,7 @@ static mega_scb *mega_ioctl (mega_host_c case DCMD_GET_DISK_CONFIG: { if ((ulong) pScb-> - buff_ptr & (PAGE_SIZE - 1)) { + buff_ptr & (MMUPAGE_SIZE - 1)) { printk ("megaraid:user address not sufficient Error.\n"); SCpnt->result = diff -urpN linux-2.4.9-linus/drivers/scsi/megaraid.c.orig linux-2.4.9-larpage/drivers/scsi/megaraid.c.orig --- linux-2.4.9-linus/drivers/scsi/megaraid.c.orig 1969-12-31 16:00:00.000000000 -0800 +++ linux-2.4.9-larpage/drivers/scsi/megaraid.c.orig 2002-11-20 02:02:52.000000000 -0800 @@ -0,0 +1,5012 @@ +/*=================================================================== + * + * Linux MegaRAID device driver + * + * Copyright 2001 American Megatrends Inc. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version + * 2 of the License, or (at your option) any later version. + * + * Version : v1.17a (July 13, 2001) + * + * Description: Linux device driver for AMI MegaRAID controller + * + * Supported controllers: MegaRAID 418, 428, 438, 466, 762, 467, 471, 490 + * 493. + * History: + * + * Version 0.90: + * Original source contributed by Dell; integrated it into the kernel and + * cleaned up some things. Added support for 438/466 controllers. + * Version 0.91: + * Aligned mailbox area on 16-byte boundary. + * Added schedule() at the end to properly clean up. + * Made improvements for conformity to linux driver standards. + * + * Version 0.92: + * Added support for 2.1 kernels. + * Reads from pci_dev struct, so it's not dependent on pcibios. + * Added some missing virt_to_bus() translations. + * Added support for SMP. + * Changed global cli()'s to spinlocks for 2.1, and simulated + * spinlocks for 2.0. + * Removed setting of SA_INTERRUPT flag when requesting Irq. + * + * Version 0.92ac: + * Small changes to the comments/formatting. 
Plus a couple of + * added notes. Returned to the authors. No actual code changes + * save printk levels. + * 8 Oct 98 Alan Cox + * + * Merged with 2.1.131 source tree. + * 12 Dec 98 K. Baranowski + * + * Version 0.93: + * Added support for vendor specific ioctl commands (M_RD_IOCTL_CMD+xxh) + * Changed some fields in MEGARAID struct to better values. + * Added signature check for Rp controllers under 2.0 kernels + * Changed busy-wait loop to be time-based + * Fixed SMP race condition in isr + * Added kfree (sgList) on release + * Added #include linux/version.h to megaraid.h for hosts.h + * Changed max_id to represent max logical drives instead of targets. + * + * Version 0.94: + * Got rid of some excess locking/unlocking + * Fixed slight memory corruption problem while memcpy'ing into mailbox + * Changed logical drives to be reported as luns rather than targets + * Changed max_id to 16 since it is now max targets/chan again. + * Improved ioctl interface for upcoming megamgr + * + * Version 0.95: + * Fixed problem of queueing multiple commands to adapter; + * still has some strange problems on some setups, so still + * defaults to single. To enable parallel commands change + * #define MULTI_IO in megaraid.h + * Changed kmalloc allocation to be done in beginning. + * Got rid of C++ style comments + * + * Version 0.96: + * 762 fully supported. + * + * Version 0.97: + * Changed megaraid_command to use wait_queue. + * + * Version 1.00: + * Checks to see if an irq occurred while in isr, and runs through + * routine again. + * Copies mailbox to temp area before processing in isr + * Added barrier() in busy wait to fix volatility bug + * Uses separate list for freed Scbs, keeps track of cmd state + * Put spinlocks around entire queue function for now... + * Full multi-io commands working stably without previous problems + * Added skipXX LILO option for Madrona motherboard support + * + * Version 1.01: + * Fixed bug in mega_cmd_done() for megamgr control commands, + * the host_byte in the result code from the scsi request to + * scsi midlayer is set to DID_BAD_TARGET when adapter's + * returned codes are 0xF0 and 0xF4. + * + * Version 1.02: + * Fixed the tape drive bug by extending the adapter timeout value + * for passthrough command to 60 seconds in mega_build_cmd(). + * + * Version 1.03: + * Fixed Madrona support. + * Changed the adapter timeout value from 60 sec in 1.02 to 10 min + * for bigger and slower tape drives. + * Added driver version printout at driver loadup time + * + * Version 1.04 + * Added code for 40 ld FW support. + * Added new ioctl command 0x81 to support NEW_READ/WRITE_CONFIG with + * data area greater than 4 KB, which is the upper bound for data + * transfer through the scsi_ioctl interface. + * The additional 32 bit field for 64bit address in the newly defined + * mailbox64 structure is set to 0 at this point. + * + * Version 1.05 + * Changed the queuing implementation for handling SCBs and completed + * commands. + * Added spinlocks in the interrupt service routine to enable the driver + * to function in the SMP environment. + * Fixed the problem of unnecessary aborts in the abort entry point, which + * also enables the driver to handle large amounts of I/O requests for + * long durations of time. + * Version 1.06 + * Intel Release + * Version 1.07 + * Removed the usage of uaccess.h file for kernel versions less than + * 2.0.36, as this file is not present in those versions.
+ * + * Version 1.08 + * Modified mega_ioctl so that 40LD megamanager would run + * Made some changes for 2.3.XX compilation, esp. wait structures + * Code merge between 1.05 and 1.06. + * Fixed a bug in the ioctl interface for concurrency between + * 8ld and 40ld firmware + * Removed the flawed semaphore logic for handling new config command + * Added support for building own scatter / gather list for big user + * mode buffers + * Added /proc file system support, so that information is available in + * human readable format + * + * Version 1a08 + * Changes for IA64 kernels. Checked for CONFIG_PROC_FS flag + * + * Version 1b08 + * Include file changes. + * Version 1b08b + * Change PCI ID value for the 471 card, use #defines when searching + * for megaraid cards. + * + * Version 1.10 + * + * I) Changes made to make the following ioctl commands work in the 0x81 interface + * a)DCMD_DELETE_LOGDRV + * b)DCMD_GET_DISK_CONFIG + * c)DCMD_DELETE_DRIVEGROUP + * d)NC_SUBOP_ENQUIRY3 + * e)DCMD_CHANGE_LDNO + * f)DCMD_CHANGE_LOOPID + * g)DCMD_FC_READ_NVRAM_CONFIG + * h)DCMD_WRITE_CONFIG + * II) Added mega_build_kernel_sg function + * III)Firmware flashing option added + * + * Version 1.10a + * + * I)Dell updates included in the source code. + * Note: This change is not tested due to the unavailability of an IA64 kernel, + * and it is in the #ifdef DELL_MODIFICATION macro which is not defined + * + * Version 1.10b + * + * I)In the M_RD_IOCTL_CMD_NEW command, the wrong way of copying the data + * to the user address was corrected + * + * Version 1.10c + * + * I) DCMD_GET_DISK_CONFIG opcode updated for the firmware changes. + * + * Version 1.11 + * I) Version number changed from 1.10c to 1.11 + * II) DCMD_WRITE_CONFIG(0x0D) command in the driver changed from + * scatter/gather list mode to direct pointer mode. + * Fixed bug of undesirably detecting HP onboard controllers which + * are disabled. + * + * Version 1.12 (Sep 21, 2000) + * + * I. Changes have been made for Dynamic DMA mapping on the IA64 platform. + * To enable all these changes define M_RD_DYNAMIC_DMA_SUPPORT in megaraid.h + * II. Got rid of windows mode comments + * III. Removed unwanted code segments + * IV. Fixed bug of HP onboard controller information (commented with + * MEGA_HP_FIX) + * + * Version 1a12 + * I. reboot notifier and new ioctl changes ported from 1c09 + * + * Version 1b12 + * I. Changes in new ioctl interface routines ( Nov 06, 2000 ) + * + * Version 1c12 + * I. Changes in new ioctl interface routines ( Nov 07, 2000 ) + * + * Version 1d12 + * I. Fixed compilation error under kernel 2.4.0 for 32-bit machine in mega_ioctl + * + * Version 1e12, 1f12 + * 1. Fixes for pci_map_single, pci_alloc_consistent along with mailbox + * alignment + * + * Version 1.13beta + * Added support for the full 64bit address space. If firmware + * supports 64bit, it goes to 64 bit mode even on x86 32bit + * systems. Data corruption issues were seen while running on the test9 + * kernel on IA64 systems; this issue was not seen on test11 on an x86 system + * + * Version 1.13c + * 1. Resolved Memory Leak when using M_RD_IOCTL_CMD interface + * 2. Resolved Queuing problem when MailBox Blocks + 3.
Added unregister_reboot_notifier support + * + * Version 1.13d + * Experimental changes in interfacing with the controller in ISR + * + * Version 1.13e + * Fixed Broken 2.2.XX compilation changes + misc changes + * + * Version 1.13f to 1.13i + * misc changes + code clean up + * Cleaned up the ioctl code and added set_mbox_xfer_addr() + * Support for START_DEV (6) + * + * Version 1.13j + * Moved some code to megaraid.h file, replaced some hard coded values + * with respective macros. Changed some functions to static + * + * Version 1.13k + * Only some indentation correction to 1.13j + * + * Version 1.13l, 1.13m, 1.13n, 1.13o + * Minor indentation changes + misc changes + * + * Version 1.13q + * Padded the new uioctl_t MIMD structure for maintaining alignment + * and size across 32 / 64 bit platforms + * Changed the way the MIMD IOCTL interface used virt_to_bus() to use pci + * memory location + * + * Version 1.13r + * 2.4.xx SCSI Changes. + * + * Version 1.13s + * Stats counter fixes + * Temporary fix for some 64 bit firmwares in 2.4.XX kernels + * + * Version 1.13t + * Support for 64bit version of READ/WRITE/VIEW DISK CONFIG + * + * Version 1.14 + * Did away with MEGADEV_IOCTL flag. It is now a standard part of the driver + * without need for a special #define flag + * Disabled old scsi ioctl path for kernel versions > 2.3.xx. This is due + * to the nature in which the new scsi code queues a new scsi command to + * the controller during SCSI IO Completion + * Driver now checks for sub-system vendor id before taking ownership of + * the controller + * + * Version 1.14a + * Added Host re-ordering + * + * Version 1.14b + * Corrected some issues which caused the older cards not to work + * + * Version 1.14c + * IOCTL changes for not handling the non-64bit firmwares under 2.4.XX + * kernel + * + * Version 1.14d + * Fixed Various MIMD Synchronization Issues + * + * Version 1.14e + * Fixed the error handling during card initialization + * + * Version 1.14f + * Multiple invocations of the mimd phase I ioctl stalled the cpu. Replaced + * spinlock with semaphore(mutex) + * + * Version 1.14g + * Fixed running out of scbs issues while running MIMD apps under heavy IO + * + * Version 1.14g-ac - 02/03/01 + * Reformatted to Linux format so I could compare to old one and cross + * check bug fixes + * Re-fixed the assorted missing 'static' cases + * Removed some unneeded version checks + * Cleaned up some of the VERSION checks in the code + * Left 2.0 support but removed 2.1.x support. + * Collected much of the compat glue into one spot + * + * Version 1.14g-ac2 - 22/03/01 + * Fixed a non-obvious dereference after free in the driver unload path + * + * Version 1.14i + * changes for making 32bit applications run on IA64 + * + * Version 1.14j + * Tue Mar 13 14:27:54 EST 2001 - AM + * Changes made in the driver to be able to run applications if the + * system has memory >4GB. + * + * + * Version 1.14k + * Thu Mar 15 18:38:11 EST 2001 - AM + * + * Firmware version check removed if subsysid==0x1111 and + * subsysvid==0x1111, since it's not yet initialized. + * + * changes made to correctly calculate the base in mega_findCard. + * + * Driver informational messages now appear on the console as well as + * with dmesg + * + * The older ioctl interface returns failure on newer (2.4.xx) kernels. + * + * Inclusion of "modversions.h" is still a debatable question. It is + * included anyway with this release.
+ * Version 1.14l + * Mon Mar 19 17:39:46 EST 2001 - AM + * + * Assorted changes to remove compilation error in 1.14k when compiled + * with kernel < 2.4.0 + * + * Version 1.14m + * Tue Mar 27 12:09:22 EST 2001 - AM + * + * Added support for extended CDBs ( > 10 bytes ) and OBDR ( One Button + * Disaster Recovery ) feature. + * + * + * Version 1.14n + * Tue Apr 10 14:28:13 EDT 2001 - AM + * + * "modversions.h" is no longer included in the code. + * 2.4.xx style mutex initialization used for older kernels also + * + * Version 1.14o + * Wed Apr 18 17:47:26 EDT 2001 - PJ + * + * Before returning status for 'inquiry', we first check if the request + * buffer is an SG list, and then return the appropriate status + * + * Version 1.14p + * Wed Apr 25 13:44:48 EDT 2001 - PJ + * + * SCSI result made appropriate in case of check conditions for extended + * passthru commands + * + * Do not support lun >7 for physically accessed devices + * + * + * Version 1.15 + * Thu Apr 19 09:38:38 EDT 2001 - AM + * + * 1.14l rollover to 1.15 - merged with main trunk after 1.15d + * + * Version 1.15b + * Wed May 16 20:10:01 EDT 2001 - AM + * + * "modversions.h" is no longer included in the code. + * 2.4.xx style mutex initialization used for older kernels also + * Brought in-sync with Alan's changes in 2.4.4 + * Note: 1.15a is on the OBDR branch (main trunk), and has not been merged yet. + * + * Version 1.15c + * Mon May 21 23:10:42 EDT 2001 - AM + * + * ioctl interface uses 2.4.x conforming pci dma calls; + * similar calls are used for older kernels + * + * Version 1.15d + * Wed May 30 17:30:41 EDT 2001 - AM + * + * NULL is not a valid first argument for pci_alloc_consistent() on + * IA64(2.4.3-2.10.1). Code shuffling done in ioctl interface to get + * "pci_dev" before making calls to pci interface routines. + * + * Version 1.16pre + * Fri Jun 1 19:40:48 EDT 2001 - AM + * + * 1.14p and 1.15d merged + * ROMB support added + * + * Version 1.16-pre1 + * Mon Jun 4 15:01:01 EDT 2001 - AM + * + * Non-ROMB firmware do not support the 0xA9 command. Value 0xFF + * (all channels are raid ) is chosen for those firmware. + * + * Version 1.16-pre2 + * Mon Jun 11 18:15:31 EDT 2001 - AM + * + * Changes for boot from any logical drive + * + * Version 1.16 + * Tue Jun 26 18:07:02 EDT 2001 - AM + * + * branched at 1.14p + * + * Check added for HP 1M/2M controllers if having firmware H.01.07 or + * H.01.08. If found, disable 64 bit support since these firmware have + * limitations for 64 bit addressing + * + * + * Version 1.17 + * Thu Jul 12 11:14:09 EDT 2001 - AM + * + * 1.16pre2 and 1.16 merged. + * + * init_MUTEX and init_MUTEX_LOCKED are defined in 2.2.19. Pre-processor + * statements are added for them + * + * Linus's 2.4.7pre3 kernel introduces a new field 'max_sectors' in the + * Scsi_Host structure, to improve IO performance. + * + * + * Version 1.17a + * Fri Jul 13 18:44:01 EDT 2001 - AM + * + * Starting from kernel 2.4.x, LUN is not < 8 - following SCSI-III. So to have + * our current formula working to calculate logical drive number, return + * failure for LUN > 7 + * + * Version 1.17a-ac + * Mon Aug 6 14:59:29 BST 2001 - "Michael Johnson" + * + * Make the HP print formatting and the check for buggy firmware runtime, + * not ifdef dependent. + * + * BUGS: + * Some older 2.1 kernels (eg. 2.1.90) have a bug in pci.c that + * fails to detect the controller as a pci device on the system. + * + * Timeout period for the upper scsi layer, i.e. SD_TIMEOUT in + * /drivers/scsi/sd.c, is too short for this controller.
SD_TIMEOUT + * value must be increased to (30 * HZ) otherwise false timeouts + * will occur in the upper layer. + * + * Never set skip_id. The existing PCI code the megaraid uses fails + * to properly check the vendor subid in some cases. Setting this then + * makes it steal other i960's and crashes some boxes + * + * Far too many ifdefs for versions. + * + *===================================================================*/ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include /* for kmalloc() */ +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,1,0) /* 0x20100 */ +#include +#else +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,3,0) /* 0x20300 */ +#include +#else +#include +#endif +#endif + +#include +#include + +#if LINUX_VERSION_CODE > KERNEL_VERSION(2,0,24) /* 0x020024 */ +#include +#endif + +/* + * These header files are required for Shutdown Notification routines + */ +#include +#include +#include + +#include "sd.h" +#include "scsi.h" +#include "hosts.h" + +#include "megaraid.h" + +/* + *================================================================ + * #Defines + *================================================================ + */ + +#define MAX_SERBUF 160 +#define COM_BASE 0x2f8 + +static ulong RDINDOOR (mega_host_config * megaCfg) +{ + return readl (megaCfg->base + 0x20); +} + +static void WRINDOOR (mega_host_config * megaCfg, ulong value) +{ + writel (value, megaCfg->base + 0x20); +} + +static ulong RDOUTDOOR (mega_host_config * megaCfg) +{ + return readl (megaCfg->base + 0x2C); +} + +static void WROUTDOOR (mega_host_config * megaCfg, ulong value) +{ + writel (value, megaCfg->base + 0x2C); +} + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,2,0) /* 0x020200 */ +#include +#define cpuid smp_processor_id() +#endif + +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,4,4) +#define scsi_set_pci_device(x,y) +#endif + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) /* 0x020400 */ + +/* + * Linux 2.4 and higher + * + * No driver private lock + * Use the io_request_lock not cli/sti + * queue task is a simple api without irq forms + */ + +MODULE_AUTHOR ("American Megatrends Inc."); +MODULE_DESCRIPTION ("AMI MegaRAID driver"); + +#define DRIVER_LOCK_T +#define DRIVER_LOCK_INIT(p) +#define DRIVER_LOCK(p) +#define DRIVER_UNLOCK(p) +#define IO_LOCK_T unsigned long io_flags = 0; +#define IO_LOCK spin_lock_irqsave(&io_request_lock,io_flags); +#define IO_UNLOCK spin_unlock_irqrestore(&io_request_lock,io_flags); + +#define queue_task_irq(a,b) queue_task(a,b) +#define queue_task_irq_off(a,b) queue_task(a,b) + +#elif LINUX_VERSION_CODE >= KERNEL_VERSION(2,2,0) /* 0x020200 */ + +/* + * Linux 2.2 and higher + * + * No driver private lock + * Use the io_request_lock not cli/sti + * No pci region api + * queue_task is now a single simple API + */ + +static char kernel_version[] = UTS_RELEASE; +MODULE_AUTHOR ("American Megatrends Inc."); +MODULE_DESCRIPTION ("AMI MegaRAID driver"); + +#define DRIVER_LOCK_T +#define DRIVER_LOCK_INIT(p) +#define DRIVER_LOCK(p) +#define DRIVER_UNLOCK(p) +#define IO_LOCK_T unsigned long io_flags = 0; +#define IO_LOCK spin_lock_irqsave(&io_request_lock,io_flags); +#define IO_UNLOCK spin_unlock_irqrestore(&io_request_lock,io_flags); + +#define pci_free_consistent(a,b,c,d) +#define pci_unmap_single(a,b,c,d) +#define pci_enable_device(x) (0) +#define queue_task_irq(a,b) queue_task(a,b) +#define queue_task_irq_off(a,b) queue_task(a,b) + 
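+/*
+ * Illustrative usage sketch (not part of the original source): with
+ * the wrappers above, a 2.2/2.4 call site can take io_request_lock
+ * without version #ifdefs, e.g.
+ *
+ *	IO_LOCK_T
+ *	IO_LOCK;
+ *	mega_rundoneq(megaCfg);
+ *	IO_UNLOCK;
+ *
+ * On 2.0 these wrappers were never used, so they are defined empty.
+ */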
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,2,19) /* 0x020219 */ +#define init_MUTEX_LOCKED(x) (*(x)=MUTEX_LOCKED) +#define init_MUTEX(x) (*(x)=MUTEX) +#define DECLARE_WAIT_QUEUE_HEAD(x) struct wait_queue *x = NULL +#endif + + +#else + +/* + * Linux 2.0 macros. Here we have to provide some of our own + * functionality. We also only work little endian 32bit. + * Again no pci_alloc/free api + * IO_LOCK/IO_LOCK_T were never used in 2.0 so now are empty + */ + +#define cpuid 0 +#define DRIVER_LOCK_T long cpu_flags; +#define DRIVER_LOCK_INIT(p) +#define DRIVER_LOCK(p) \ + save_flags(cpu_flags); \ + cli(); +#define DRIVER_UNLOCK(p) \ + restore_flags(cpu_flags); +#define IO_LOCK_T +#define IO_LOCK(p) +#define IO_UNLOCK(p) +#define le32_to_cpu(x) (x) +#define cpu_to_le32(x) (x) + +#define pci_free_consistent(a,b,c,d) +#define pci_unmap_single(a,b,c,d) + +#define init_MUTEX_LOCKED(x) (*(x)=MUTEX_LOCKED) +#define init_MUTEX(x) (*(x)=MUTEX) + +#define pci_enable_device(x) (0) + +/* + * 2.0 lacks spinlocks, iounmap/ioremap + */ + +#define ioremap vremap +#define iounmap vfree + + /* simulate spin locks */ +typedef struct { + volatile char lock; +} spinlock_t; + +#define spin_lock_init(x) { (x)->lock = 0;} +#define spin_lock_irqsave(x,flags) { while ((x)->lock) barrier();\ + (x)->lock=1; save_flags(flags);\ + cli();} +#define spin_unlock_irqrestore(x,flags) { (x)->lock=0; restore_flags(flags);} + +#define DECLARE_WAIT_QUEUE_HEAD(x) struct wait_queue *x = NULL + +#endif + + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) /* 0x020400 */ +#define dma_alloc_consistent pci_alloc_consistent +#define dma_free_consistent pci_free_consistent +#else +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,2,19) /* 0x020219 */ +typedef unsigned long dma_addr_t; +#endif +void *dma_alloc_consistent(void *, size_t, dma_addr_t *); +void dma_free_consistent(void *, size_t, void *, dma_addr_t); +int mega_get_order(int); +int pow_2(int); +#endif + +/* set SERDEBUG to 1 to enable serial debugging */ +#define SERDEBUG 0 +#if SERDEBUG +static void ser_init (void); +static void ser_puts (char *str); +static void ser_putc (char c); +static int ser_printk (const char *fmt, ...); +#endif + +#ifdef CONFIG_PROC_FS +#define COPY_BACK if (offset > megaCfg->procidx) { \ + *eof = TRUE; \ + megaCfg->procidx = 0; \ + megaCfg->procbuf[0] = 0; \ + return 0;} \ + if ((count + offset) > megaCfg->procidx) { \ + count = megaCfg->procidx - offset; \ + *eof = TRUE; } \ + memcpy(page, &megaCfg->procbuf[offset], count); \ + megaCfg->procidx = 0; \ + megaCfg->procbuf[0] = 0; +#endif + +/* + * ================================================================ + * Global variables + *================================================================ + */ + +/* Use "megaraid=skipXX" as LILO option to prohibit driver from scanning + XX scsi id on each channel. 
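+   (For example, "megaraid=skip6" - an illustrative value - would
+   leave scsi id 6 unprobed on every channel.)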
Used for Madrona motherboard, where SAF_TE + processor id cannot be scanned */ + +static char *megaraid; +#if LINUX_VERSION_CODE > KERNEL_VERSION(2,1,0) /* 0x20100 */ +#ifdef MODULE +MODULE_PARM (megaraid, "s"); +#endif +#endif +static int skip_id = -1; +static int numCtlrs = 0; +static mega_host_config *megaCtlrs[FC_MAX_CHANNELS] = { 0 }; +static struct proc_dir_entry *mega_proc_dir_entry; + +#if DEBUG +static u32 maxCmdTime = 0; +#endif + +static mega_scb *pLastScb = NULL; +static struct notifier_block mega_notifier = { + megaraid_reboot_notify, + NULL, + 0 +}; + +/* For controller re-ordering */ +struct mega_hbas mega_hbas[MAX_CONTROLLERS]; + +/* + * The File Operations structure for the serial/ioctl interface of the driver + */ +/* For controller re-ordering */ + +static struct file_operations megadev_fops = { + ioctl:megadev_ioctl_entry, + open:megadev_open, + release:megadev_close, +}; + +/* + * Array to structures for storing the information about the controllers. This + * information is sent to the user level applications, when they do an ioctl + * for this information. + */ +static struct mcontroller mcontroller[MAX_CONTROLLERS]; + +/* The current driver version */ +static u32 driver_ver = 117; + +/* major number used by the device for character interface */ +static int major; + +static struct semaphore mimd_ioctl_sem; +static struct semaphore mimd_entry_mtx; + +#if SERDEBUG +volatile static spinlock_t serial_lock; +#endif + +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,4,0) /* 0x20300 */ +static struct proc_dir_entry proc_scsi_megaraid = { + PROC_SCSI_MEGARAID, 8, "megaraid", + S_IFDIR | S_IRUGO | S_IXUGO, 2 +}; +#endif + +#ifdef CONFIG_PROC_FS +extern struct proc_dir_entry proc_root; +#endif + +static char mega_ch_class; /* channels are raid or scsi */ +#define IS_RAID_CH(ch) ( (mega_ch_class >> (ch)) & 0x01 ) + +#if SERDEBUG +static char strbuf[MAX_SERBUF + 1]; + +static void ser_init (void) +{ + unsigned port = COM_BASE; + + outb (0x80, port + 3); + outb (0, port + 1); + /* 9600 Baud, if 19200: outb(6,port) */ + outb (12, port); + outb (3, port + 3); + outb (0, port + 1); +} + +static void ser_puts (char *str) +{ + char *ptr; + + ser_init (); + for (ptr = str; *ptr; ++ptr) + ser_putc (*ptr); +} + +static void ser_putc (char c) +{ + unsigned port = COM_BASE; + + while ((inb (port + 5) & 0x20) == 0) ; + outb (c, port); + if (c == 0x0a) { + while ((inb (port + 5) & 0x20) == 0) ; + outb (0x0d, port); + } +} + +static int ser_printk (const char *fmt, ...) 
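+/*
+ * (Descriptive note: formats into the shared strbuf under serial_lock,
+ * so traces from different cpus do not interleave, then emits the
+ * string on the COM_BASE port. Reached via the double-parenthesis
+ * TRACE(("...")) macro when SERDEBUG is 1.)
+ */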
+{ + va_list args; + int i; + long flags; + + spin_lock_irqsave (&serial_lock, flags); + va_start (args, fmt); + i = vsprintf (strbuf, fmt, args); + ser_puts (strbuf); + va_end (args); + spin_unlock_irqrestore (&serial_lock, flags); + + return i; +} + +#define TRACE(a) { ser_printk a;} + +#else +#define TRACE(A) +#endif + +#define TRACE1(a) + +static void callDone (Scsi_Cmnd * SCpnt) +{ + if (SCpnt->result) { + TRACE (("*** %.08lx %.02x <%d.%d.%d> = %x\n", + SCpnt->serial_number, SCpnt->cmnd[0], SCpnt->channel, + SCpnt->target, SCpnt->lun, SCpnt->result)); + } + SCpnt->scsi_done (SCpnt); +} + +/*------------------------------------------------------------------------- + * + * Local functions + * + *-------------------------------------------------------------------------*/ + +/*======================= + * Free a SCB structure + *======================= + */ +static void mega_freeSCB (mega_host_config * megaCfg, mega_scb * pScb) +{ + + mega_scb *pScbtmp; + + if ((pScb == NULL) || (pScb->idx >= 0xFE)) { + return; + } +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + switch (pScb->dma_type) { + case M_RD_DMA_TYPE_NONE: + break; + case M_RD_PTHRU_WITH_BULK_DATA: + pci_unmap_single (megaCfg->dev, pScb->dma_h_bulkdata, + pScb->pthru->dataxferlen, + pScb->dma_direction); + break; + case M_RD_EPTHRU_WITH_BULK_DATA: + pci_unmap_single (megaCfg->dev, pScb->dma_h_bulkdata, + pScb->epthru->dataxferlen, + pScb->dma_direction); + break; + case M_RD_PTHRU_WITH_SGLIST: + { + int count; + for (count = 0; count < pScb->sglist_count; count++) { + pci_unmap_single (megaCfg->dev, + pScb->dma_h_sglist[count], + pScb->sgList[count].length, + pScb->dma_direction); + + } + break; + } + case M_RD_BULK_DATA_ONLY: + pci_unmap_single (megaCfg->dev, + pScb->dma_h_bulkdata, + pScb->iDataSize, pScb->dma_direction); + + break; + case M_RD_SGLIST_ONLY: + pci_unmap_sg (megaCfg->dev, + pScb->SCpnt->request_buffer, + pScb->SCpnt->use_sg, pScb->dma_direction); + break; + default: + break; + } +#endif + + /* Unlink from pending queue */ + if (pScb == megaCfg->qPendingH) { + + if (megaCfg->qPendingH == megaCfg->qPendingT) + megaCfg->qPendingH = megaCfg->qPendingT = NULL; + else + megaCfg->qPendingH = megaCfg->qPendingH->next; + + megaCfg->qPcnt--; + + } else { + for (pScbtmp = megaCfg->qPendingH; pScbtmp; + pScbtmp = pScbtmp->next) { + + if (pScbtmp->next == pScb) { + + pScbtmp->next = pScb->next; + + if (pScb == megaCfg->qPendingT) { + megaCfg->qPendingT = pScbtmp; + } + + megaCfg->qPcnt--; + break; + } + } + } + + /* Link back into free list */ + pScb->state = SCB_FREE; + pScb->SCpnt = NULL; + + if (megaCfg->qFreeH == (mega_scb *) NULL) { + megaCfg->qFreeH = megaCfg->qFreeT = pScb; + } else { + megaCfg->qFreeT->next = pScb; + megaCfg->qFreeT = pScb; + } + + megaCfg->qFreeT->next = NULL; + megaCfg->qFcnt++; + +} + +/*=========================== + * Allocate a SCB structure + *=========================== + */ +static mega_scb *mega_allocateSCB (mega_host_config * megaCfg, Scsi_Cmnd * SCpnt) +{ + mega_scb *pScb; + + /* Unlink command from Free List */ + if ((pScb = megaCfg->qFreeH) != NULL) { + megaCfg->qFreeH = pScb->next; + megaCfg->qFcnt--; + + pScb->isrcount = jiffies; + pScb->next = NULL; + pScb->state = SCB_ACTIVE; + pScb->SCpnt = SCpnt; + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + pScb->dma_type = M_RD_DMA_TYPE_NONE; +#endif + + return pScb; + } + + printk (KERN_WARNING "Megaraid: Could not allocate free SCB!!!\n"); + + return NULL; +} + +/* Run through the list of completed requests and finish it */ +static void 
mega_rundoneq (mega_host_config * megaCfg) +{ + Scsi_Cmnd *SCpnt; + + while ((SCpnt = megaCfg->qCompletedH) != NULL) { + megaCfg->qCompletedH = (Scsi_Cmnd *) SCpnt->host_scribble; + megaCfg->qCcnt--; + + SCpnt->host_scribble = (unsigned char *) NULL; /* XC : sep 14 */ + /* Callback */ + callDone (SCpnt); + } + + megaCfg->qCompletedH = megaCfg->qCompletedT = NULL; +} + +/* + * Runs through the list of pending requests + * Assumes that mega_lock spin_lock has been acquired. + */ +static int mega_runpendq (mega_host_config * megaCfg) +{ + mega_scb *pScb; + int rc; + + /* Issue any pending commands to the card */ + for (pScb = megaCfg->qPendingH; pScb; pScb = pScb->next) { + if (pScb->state == SCB_ACTIVE) { + if ((rc = + megaIssueCmd (megaCfg, pScb->mboxData, pScb, 1)) == -1) + return rc; + } + } + return 0; +} + +/* Add command to the list of completed requests */ + +static void mega_cmd_done (mega_host_config * megaCfg, mega_scb * pScb, int status) +{ + int islogical; + Scsi_Cmnd *SCpnt; + mega_passthru *pthru; + mega_ext_passthru *epthru; + mega_mailbox *mbox; + struct scatterlist *sgList; + u8 c; + + if (pScb == NULL) { + TRACE (("NULL pScb in mega_cmd_done!")); + printk(KERN_CRIT "NULL pScb in mega_cmd_done!"); + return; + } + + SCpnt = pScb->SCpnt; + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + pthru = pScb->pthru; + epthru = pScb->epthru; +#else + pthru = &pScb->pthru; + epthru = &pScb->epthru; +#endif + + mbox = (mega_mailbox *) & pScb->mboxData; + + if (SCpnt == NULL) { + TRACE (("NULL SCpnt in mega_cmd_done!")); + TRACE (("pScb->idx = ", pScb->idx)); + TRACE (("pScb->state = ", pScb->state)); + panic("megaraid:Problem...!\n"); + } + + islogical = (SCpnt->channel == megaCfg->host->max_channel); + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + /* Special Case to handle PassThrough->XferAddress > 4GB */ + switch (SCpnt->cmnd[0]) { + case INQUIRY: + case READ_CAPACITY: + memcpy (SCpnt->request_buffer, + pScb->bounce_buffer, SCpnt->request_bufflen); + break; + } +#endif + + mega_freeSCB (megaCfg, pScb); + + /* + * Do not return the presence of a hard disk on the channel: if an + * inquiry was sent, and the returned data is hard disk or removable + * hard disk and not logical, the request should return failure! - PJ + */ +#if 0 + if (SCpnt->cmnd[0] == INQUIRY && ((((u_char *) SCpnt->request_buffer)[0] & 0x1F) == TYPE_DISK) && !islogical) { + status = 0xF0; + } +#endif + if (SCpnt->cmnd[0] == INQUIRY && !islogical) { + if ( SCpnt->use_sg ) { + sgList = (struct scatterlist *)SCpnt->request_buffer; + memcpy(&c, sgList[0].address, 0x1); + } else { + memcpy(&c, SCpnt->request_buffer, 0x1); + } +#if 0 + if( (c & 0x1F ) == TYPE_DISK ) { + status = 0xF0; + } +#endif + if( IS_RAID_CH(SCpnt->channel) && ((c & 0x1F ) == TYPE_DISK) ) { + status = 0xF0; + } + } + + + /* clear result; otherwise, success returns corrupt value */ + SCpnt->result = 0; + + if ((SCpnt->cmnd[0] & M_RD_IOCTL_CMD)) { /* i.e. ioctl cmd such as M_RD_IOCTL_CMD, M_RD_IOCTL_CMD_NEW of megamgr */ + switch (status) { + case 2: + case 0xF0: + case 0xF4: + SCpnt->result = (DID_BAD_TARGET << 16) | status; + break; + default: + SCpnt->result |= status; + } /*end of switch */ + } else { + /* Convert MegaRAID status to Linux error code */ + switch (status) { + case 0x00: /* SUCCESS , i.e. SCSI_STATUS_GOOD */ + SCpnt->result |= (DID_OK << 16); + break; + + case 0x02: /* ERROR_ABORTED, i.e.
SCSI_STATUS_CHECK_CONDITION */ + + /*set sense_buffer and result fields */ + if (mbox->cmd == MEGA_MBOXCMD_PASSTHRU) { + memcpy (SCpnt->sense_buffer, pthru->reqsensearea, 14); + } else if (mbox->cmd == MEGA_MBOXCMD_EXTPASSTHRU) { + memcpy( + SCpnt->sense_buffer, + epthru->reqsensearea, 14 + ); + SCpnt->result = (DRIVER_SENSE << 24) | (DID_OK << 16) | (CHECK_CONDITION << 1); + /*SCpnt->result = + (DRIVER_SENSE << 24) | + (DID_ERROR << 16) | status;*/ + } else { + SCpnt->sense_buffer[0] = 0x70; + SCpnt->sense_buffer[2] = ABORTED_COMMAND; + SCpnt->result |= (CHECK_CONDITION << 1); + } + break; + + case 0x08: /* ERR_DEST_DRIVE_FAILED, i.e. SCSI_STATUS_BUSY */ + SCpnt->result |= (DID_BUS_BUSY << 16) | status; + break; + + default: + SCpnt->result |= (DID_BAD_TARGET << 16) | status; + break; + } + } + + /* Add Scsi_Command to end of completed queue */ + if (megaCfg->qCompletedH == NULL) { + megaCfg->qCompletedH = megaCfg->qCompletedT = SCpnt; + } else { + megaCfg->qCompletedT->host_scribble = (unsigned char *) SCpnt; + megaCfg->qCompletedT = SCpnt; + } + + megaCfg->qCompletedT->host_scribble = (unsigned char *) NULL; + megaCfg->qCcnt++; +} + +/*------------------------------------------------------------------- + * + * Build a SCB from a Scsi_Cmnd + * + * Returns a SCB pointer, or NULL + * If NULL is returned, the scsi_done function MUST have been called + * + *-------------------------------------------------------------------*/ + +static mega_scb *mega_build_cmd (mega_host_config * megaCfg, Scsi_Cmnd * SCpnt) +{ + mega_scb *pScb; + mega_mailbox *mbox; + mega_passthru *pthru; + mega_ext_passthru *epthru; + long seg; + char islogical; + char lun = SCpnt->lun; + + if ((SCpnt->cmnd[0] == MEGADEVIOC)) + return megadev_doioctl (megaCfg, SCpnt); + + if ((SCpnt->cmnd[0] == M_RD_IOCTL_CMD) + || (SCpnt->cmnd[0] == M_RD_IOCTL_CMD_NEW)) +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,3,0) + return mega_ioctl (megaCfg, SCpnt); /* Handle IOCTL command */ +#else + { + printk(KERN_WARNING "megaraid ioctl: older interface - " + "not supported.\n"); + return NULL; + } +#endif + + islogical = (IS_RAID_CH(SCpnt->channel) && /* virtual ch is raid - AM */ + (SCpnt->channel == megaCfg->host->max_channel)); + + if ( ! megaCfg->support_ext_cdb ) { + if (!islogical && lun != 0) { + SCpnt->result = (DID_BAD_TARGET << 16); + callDone (SCpnt); + return NULL; + } + } + + if (!islogical && SCpnt->target == skip_id) { + SCpnt->result = (DID_BAD_TARGET << 16); + callDone (SCpnt); + return NULL; + } + + /* + * Return error for LUN > 7. The way we calculate logical drive number + * requires it to be so.
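+	 * (Worked example, added for clarity: the computation below is
+	 * target*8 + lun, so target 2, lun 3 addresses logical drive 19;
+	 * a lun of 8 or more would alias into the next target's range.)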
+ */ + if( lun > 7 ) { + SCpnt->result = (DID_BAD_TARGET << 16); + callDone (SCpnt); + return NULL; + } + + if (islogical) { + + lun = (SCpnt->target * 8) + lun; + + if(lun >= megaCfg->numldrv ) { + SCpnt->result = (DID_BAD_TARGET << 16); + callDone (SCpnt); + return NULL; + } + + /* + * If we have a logical drive with boot enabled, project it first + */ + if( megaCfg->boot_ldrv_enabled ) { + if( lun == 0 ) { + lun = megaCfg->boot_ldrv; + } + else { + if( lun <= megaCfg->boot_ldrv ) { + lun--; + } + } + } + } + /*----------------------------------------------------- + * + * Logical drive commands + * + *-----------------------------------------------------*/ + if (islogical) { + switch (SCpnt->cmnd[0]) { + case TEST_UNIT_READY: + memset (SCpnt->request_buffer, 0, SCpnt->request_bufflen); + SCpnt->result = (DID_OK << 16); + callDone (SCpnt); + return NULL; + + case MODE_SENSE: + memset (SCpnt->request_buffer, 0, SCpnt->cmnd[4]); + SCpnt->result = (DID_OK << 16); + callDone (SCpnt); + return NULL; + + case READ_CAPACITY: + case INQUIRY: + /* Allocate a SCB and initialize passthru */ + if ((pScb = mega_allocateSCB (megaCfg, SCpnt)) == NULL) { + SCpnt->result = (DID_ERROR << 16); + callDone (SCpnt); + return NULL; + } +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + pthru = pScb->pthru; +#else + pthru = &pScb->pthru; +#endif + + mbox = (mega_mailbox *) & pScb->mboxData; + memset (mbox, 0, sizeof (pScb->mboxData)); + memset (pthru, 0, sizeof (mega_passthru)); + pthru->timeout = 0; + pthru->ars = 1; + pthru->reqsenselen = 14; + pthru->islogical = 1; + pthru->logdrv = lun; + pthru->cdblen = SCpnt->cmd_len; + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + /*Not sure about the direction */ + pScb->dma_direction = PCI_DMA_BIDIRECTIONAL; + pScb->dma_type = M_RD_PTHRU_WITH_BULK_DATA; + +#if 0 +/* Normal Code w/o the need for bounce buffer */ + pScb->dma_h_bulkdata + = pci_map_single (megaCfg->dev, + SCpnt->request_buffer, + SCpnt->request_bufflen, + pScb->dma_direction); + + pthru->dataxferaddr = pScb->dma_h_bulkdata; +#else +/* Special Code to use bounce buffer for READ_CAPA/INQ */ + pthru->dataxferaddr = pScb->dma_bounce_buffer; + pScb->dma_type = M_RD_DMA_TYPE_NONE; +#endif + +#else + pthru->dataxferaddr = + virt_to_bus (SCpnt->request_buffer); +#endif + + pthru->dataxferlen = SCpnt->request_bufflen; + memcpy (pthru->cdb, SCpnt->cmnd, SCpnt->cmd_len); + + /* Initialize mailbox area */ + mbox->cmd = MEGA_MBOXCMD_PASSTHRU; + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + mbox->xferaddr = pScb->dma_passthruhandle64; + TRACE1 (("M_RD_PTHRU_WITH_BULK_DATA Enabled \n")); +#else + mbox->xferaddr = virt_to_bus (pthru); +#endif + return pScb; + + case READ_6: + case WRITE_6: + case READ_10: + case WRITE_10: + /* Allocate a SCB and initialize mailbox */ + if ((pScb = mega_allocateSCB (megaCfg, SCpnt)) == NULL) { + SCpnt->result = (DID_ERROR << 16); + callDone (SCpnt); + return NULL; + } + mbox = (mega_mailbox *) & pScb->mboxData; + + memset (mbox, 0, sizeof (pScb->mboxData)); + mbox->logdrv = lun; + + if (megaCfg->flag & BOARD_64BIT) { + mbox->cmd = (*SCpnt->cmnd == READ_6 + || *SCpnt->cmnd == + READ_10) ? MEGA_MBOXCMD_LREAD64 : + MEGA_MBOXCMD_LWRITE64; + } else { + mbox->cmd = (*SCpnt->cmnd == READ_6 + || *SCpnt->cmnd == + READ_10) ? 
MEGA_MBOXCMD_LREAD : + MEGA_MBOXCMD_LWRITE; + } + + /* 6-byte */ + if (*SCpnt->cmnd == READ_6 || *SCpnt->cmnd == WRITE_6) { + mbox->numsectors = (u32) SCpnt->cmnd[4]; + mbox->lba = + ((u32) SCpnt->cmnd[1] << 16) | + ((u32) SCpnt->cmnd[2] << 8) | + (u32) SCpnt->cmnd[3]; + mbox->lba &= 0x1FFFFF; + + if (*SCpnt->cmnd == READ_6) { + megaCfg->nReads[(int) lun]++; + megaCfg->nReadBlocks[(int) lun] += + mbox->numsectors; + } else { + megaCfg->nWrites[(int) lun]++; + megaCfg->nWriteBlocks[(int) lun] += + mbox->numsectors; + } + } + + /* 10-byte */ + if (*SCpnt->cmnd == READ_10 || *SCpnt->cmnd == WRITE_10) { + mbox->numsectors = + (u32) SCpnt->cmnd[8] | + ((u32) SCpnt->cmnd[7] << 8); + mbox->lba = + ((u32) SCpnt->cmnd[2] << 24) | + ((u32) SCpnt->cmnd[3] << 16) | + ((u32) SCpnt->cmnd[4] << 8) | + (u32) SCpnt->cmnd[5]; + + if (*SCpnt->cmnd == READ_10) { + megaCfg->nReads[(int) lun]++; + megaCfg->nReadBlocks[(int) lun] += + mbox->numsectors; + } else { + megaCfg->nWrites[(int) lun]++; + megaCfg->nWriteBlocks[(int) lun] += + mbox->numsectors; + } + } + + /* 12-byte */ + if (*SCpnt->cmnd == READ_12 || *SCpnt->cmnd == WRITE_12) { + mbox->lba = + ((u32) SCpnt->cmnd[2] << 24) | + ((u32) SCpnt->cmnd[3] << 16) | + ((u32) SCpnt->cmnd[4] << 8) | + (u32) SCpnt->cmnd[5]; + + mbox->numsectors = + ((u32) SCpnt->cmnd[6] << 24) | + ((u32) SCpnt->cmnd[7] << 16) | + ((u32) SCpnt->cmnd[8] << 8) | + (u32) SCpnt->cmnd[9]; + + if (*SCpnt->cmnd == READ_12) { + megaCfg->nReads[(int) lun]++; + megaCfg->nReadBlocks[(int) lun] += + mbox->numsectors; + } else { + megaCfg->nWrites[(int) lun]++; + megaCfg->nWriteBlocks[(int) lun] += + mbox->numsectors; + } + } + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + if (*SCpnt->cmnd == READ_6 || *SCpnt->cmnd == READ_10 + || *SCpnt->cmnd == READ_12) { + pScb->dma_direction = PCI_DMA_FROMDEVICE; + } else { /*WRITE_6 or WRITE_10 */ + pScb->dma_direction = PCI_DMA_TODEVICE; + } +#endif + + /* Calculate Scatter-Gather info */ + mbox->numsgelements = mega_build_sglist (megaCfg, pScb, + (u32 *)&mbox->xferaddr, (u32 *)&seg); + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + pScb->iDataSize = seg; + + if (mbox->numsgelements) { + pScb->dma_type = M_RD_SGLIST_ONLY; + TRACE1 (("M_RD_SGLIST_ONLY Enabled \n")); + } else { + pScb->dma_type = M_RD_BULK_DATA_ONLY; + TRACE1 (("M_RD_BULK_DATA_ONLY Enabled \n")); + } +#endif + + return pScb; + default: + SCpnt->result = (DID_BAD_TARGET << 16); + callDone (SCpnt); + return NULL; + } + } + /*----------------------------------------------------- + * + * Passthru drive commands + * + *-----------------------------------------------------*/ + else { + /* Allocate a SCB and initialize passthru */ + if ((pScb = mega_allocateSCB (megaCfg, SCpnt)) == NULL) { + SCpnt->result = (DID_ERROR << 16); + callDone (SCpnt); + return NULL; + } + + mbox = (mega_mailbox *) pScb->mboxData; + memset (mbox, 0, sizeof (pScb->mboxData)); + + if ( megaCfg->support_ext_cdb && SCpnt->cmd_len > 10 ) { + epthru = mega_prepare_extpassthru(megaCfg, pScb, SCpnt); + mbox->cmd = MEGA_MBOXCMD_EXTPASSTHRU; +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + mbox->xferaddr = pScb->dma_ext_passthruhandle64; + + if(epthru->numsgelements) { + pScb->dma_type = M_RD_PTHRU_WITH_SGLIST; + } else { + pScb->dma_type = M_RD_EPTHRU_WITH_BULK_DATA; + } +#else + mbox->xferaddr = virt_to_bus(epthru); +#endif + } + else { + pthru = mega_prepare_passthru(megaCfg, pScb, SCpnt); + + /* Initialize mailbox */ + mbox->cmd = MEGA_MBOXCMD_PASSTHRU; +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + 
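+			/*
+			 * (Note: on 2.4 the mailbox takes the passthru
+			 * structure's PCI DMA handle, presumably mapped when
+			 * the SCB was set up; older kernels fall back to
+			 * virt_to_bus() in the #else branch.)
+			 */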
mbox->xferaddr = pScb->dma_passthruhandle64; + + if (pthru->numsgelements) { + pScb->dma_type = M_RD_PTHRU_WITH_SGLIST; + } else { + pScb->dma_type = M_RD_PTHRU_WITH_BULK_DATA; + } +#else + mbox->xferaddr = virt_to_bus(pthru); +#endif + } + return pScb; + } + return NULL; +} + +static mega_passthru * +mega_prepare_passthru(mega_host_config *megacfg, mega_scb *scb, Scsi_Cmnd *sc) +{ + mega_passthru *pthru; + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + pthru = scb->pthru; +#else + pthru = &scb->pthru; +#endif + memset (pthru, 0, sizeof (mega_passthru)); + + /* set adapter timeout value to 10 min. for tape drive */ + /* 0=6sec/1=60sec/2=10min/3=3hrs */ + pthru->timeout = 2; + pthru->ars = 1; + pthru->reqsenselen = 14; + pthru->islogical = 0; + pthru->channel = (megacfg->flag & BOARD_40LD) ? 0 : sc->channel; + pthru->target = (megacfg->flag & BOARD_40LD) ? + (sc->channel << 4) | sc->target : sc->target; + pthru->cdblen = sc->cmd_len; + pthru->logdrv = sc->lun; + + memcpy (pthru->cdb, sc->cmnd, sc->cmd_len); + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + /* Not sure about the direction */ + scb->dma_direction = PCI_DMA_BIDIRECTIONAL; + + /* Special Code for Handling READ_CAPA/ INQ using bounce buffers */ + switch (sc->cmnd[0]) { + case INQUIRY: + case READ_CAPACITY: + pthru->numsgelements = 0; + pthru->dataxferaddr = scb->dma_bounce_buffer; + pthru->dataxferlen = sc->request_bufflen; + break; + default: + pthru->numsgelements = + mega_build_sglist( + megacfg, scb, (u32 *)&pthru->dataxferaddr, + (u32 *)&pthru->dataxferlen + ); + break; + } +#else + pthru->numsgelements = + mega_build_sglist( + megacfg, scb, (u32 *)&pthru->dataxferaddr, + (u32 *)&pthru->dataxferlen + ); +#endif + return pthru; +} + +static mega_ext_passthru * +mega_prepare_extpassthru(mega_host_config *megacfg, mega_scb *scb, Scsi_Cmnd *sc) +{ + mega_ext_passthru *epthru; + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + epthru = scb->epthru; +#else + epthru = &scb->epthru; +#endif + memset(epthru, 0, sizeof(mega_ext_passthru)); + + /* set adapter timeout value to 10 min. for tape drive */ + /* 0=6sec/1=60sec/2=10min/3=3hrs */ + epthru->timeout = 2; + epthru->ars = 1; + epthru->reqsenselen = 14; + epthru->islogical = 0; + epthru->channel = (megacfg->flag & BOARD_40LD) ? 0 : sc->channel; + epthru->target = (megacfg->flag & BOARD_40LD) ? 
+ (sc->channel << 4) | sc->target : sc->target; + epthru->cdblen = sc->cmd_len; + epthru->logdrv = sc->lun; + + memcpy(epthru->cdb, sc->cmnd, sc->cmd_len); + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + /* Not sure about the direction */ + scb->dma_direction = PCI_DMA_BIDIRECTIONAL; + + /* Special Code for Handling READ_CAPA/ INQ using bounce buffers */ + switch (sc->cmnd[0]) { + case INQUIRY: + case READ_CAPACITY: + epthru->numsgelements = 0; + epthru->dataxferaddr = scb->dma_bounce_buffer; + epthru->dataxferlen = sc->request_bufflen; + break; + default: + epthru->numsgelements = + mega_build_sglist( + megacfg, scb, (u32 *)&epthru->dataxferaddr, + (u32 *)&epthru->dataxferlen + ); + break; + } +#else + epthru->numsgelements = + mega_build_sglist( + megacfg, scb, (u32 *)&epthru->dataxferaddr, + (u32 *)&epthru->dataxferlen + ); +#endif + return epthru; +} + +/* Handle Driver Level IOCTLs + * Return value of 0 indicates this function could not handle , so continue + * processing +*/ + +static int mega_driver_ioctl (mega_host_config * megaCfg, Scsi_Cmnd * SCpnt) +{ + unsigned char *data = (unsigned char *) SCpnt->request_buffer; + mega_driver_info driver_info; + + /* If this is not our command dont do anything */ + if (SCpnt->cmnd[0] != M_RD_DRIVER_IOCTL_INTERFACE) + return 0; + + switch (SCpnt->cmnd[1]) { + case GET_DRIVER_INFO: + if (SCpnt->request_bufflen < sizeof (driver_info)) { + SCpnt->result = DID_BAD_TARGET << 16; + callDone (SCpnt); + return 1; + } + + driver_info.size = sizeof (driver_info) - sizeof (int); + driver_info.version = MEGARAID_IOCTL_VERSION; + memcpy (data, &driver_info, sizeof (driver_info)); + break; + default: + SCpnt->result = DID_BAD_TARGET << 16; + } + + callDone (SCpnt); + return 1; +} + +static void inline set_mbox_xfer_addr (mega_host_config * megaCfg, mega_scb * pScb, + mega_ioctl_mbox * mbox, u32 direction) +{ + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + switch (direction) { + case TO_DEVICE: + pScb->dma_direction = PCI_DMA_TODEVICE; + break; + case FROM_DEVICE: + pScb->dma_direction = PCI_DMA_FROMDEVICE; + break; + case FROMTO_DEVICE: + pScb->dma_direction = PCI_DMA_BIDIRECTIONAL; + break; + } + + pScb->dma_h_bulkdata + = pci_map_single (megaCfg->dev, + pScb->buff_ptr, + pScb->iDataSize, pScb->dma_direction); + mbox->xferaddr = pScb->dma_h_bulkdata; + pScb->dma_type = M_RD_BULK_DATA_ONLY; + TRACE1 (("M_RD_BULK_DATA_ONLY Enabled \n")); +#else + mbox->xferaddr = virt_to_bus (pScb->buff_ptr); +#endif +} + +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,3,0) + +/*-------------------------------------------------------------------- + * build RAID commands for controller, passed down through ioctl() + *--------------------------------------------------------------------*/ +static mega_scb *mega_ioctl (mega_host_config * megaCfg, Scsi_Cmnd * SCpnt) +{ + mega_scb *pScb; + mega_ioctl_mbox *mbox; + mega_mailbox *mailbox; + mega_passthru *pthru; + u8 *mboxdata; + long seg, i = 0; + unsigned char *data = (unsigned char *) SCpnt->request_buffer; + + if ((pScb = mega_allocateSCB (megaCfg, SCpnt)) == NULL) { + SCpnt->result = (DID_ERROR << 16); + callDone (SCpnt); + return NULL; + } + pthru = &pScb->pthru; + + mboxdata = (u8 *) & pScb->mboxData; + mbox = (mega_ioctl_mbox *) & pScb->mboxData; + mailbox = (mega_mailbox *) & pScb->mboxData; + memset (mailbox, 0, sizeof (pScb->mboxData)); + + if (data[0] == 0x03) { /* passthrough command */ + unsigned char cdblen = data[2]; + memset (pthru, 0, sizeof (mega_passthru)); + pthru->islogical = (data[cdblen + 3] & 0x80) ? 
1 : 0; + pthru->timeout = data[cdblen + 3] & 0x07; + pthru->reqsenselen = 14; + pthru->ars = (data[cdblen + 3] & 0x08) ? 1 : 0; + pthru->logdrv = data[cdblen + 4]; + pthru->channel = data[cdblen + 5]; + pthru->target = data[cdblen + 6]; + pthru->cdblen = cdblen; + memcpy (pthru->cdb, &data[3], cdblen); + + mailbox->cmd = MEGA_MBOXCMD_PASSTHRU; + + + pthru->numsgelements = mega_build_sglist (megaCfg, pScb, + (u32 *) & pthru-> + dataxferaddr, + (u32 *) & pthru-> + dataxferlen); + + mailbox->xferaddr = virt_to_bus (pthru); + + for (i = 0; i < (SCpnt->request_bufflen - cdblen - 7); i++) { + data[i] = data[i + cdblen + 7]; + } + return pScb; + } + /* else normal (nonpassthru) command */ + +#if LINUX_VERSION_CODE > KERNEL_VERSION(2,0,24) /*0x020024 */ + /* + * copy_from_user() is used in case of data of more than 4KB. + * This is used only with adapters which support more than 8 logical + * drives. This feature is disabled on kernels earlier than or the same + * as 2.0.36, as the uaccess.h file is not available with those kernels. + */ + + if (SCpnt->cmnd[0] == M_RD_IOCTL_CMD_NEW) { + /* use external data area for large xfers */ + /* If cmnd[0] is set to M_RD_IOCTL_CMD_NEW then * + * cmnd[4..7] = external user buffer * + * cmnd[8..11] = length of buffer * + * */ + char *user_area = (char *)*((u32*)&SCpnt->cmnd[4]); + u32 xfer_size = *((u32 *) & SCpnt->cmnd[8]); + switch (data[0]) { + case FW_FIRE_WRITE: + case FW_FIRE_FLASH: + if ((ulong) user_area & (PAGE_SIZE - 1)) { + printk + ("megaraid: user address not aligned on 4K boundary. Error.\n"); + SCpnt->result = (DID_ERROR << 16); + callDone (SCpnt); + return NULL; + } + break; + default: + break; + } + + if (!(pScb->buff_ptr = kmalloc (xfer_size, GFP_KERNEL))) { + printk + ("megaraid: Insufficient mem for M_RD_IOCTL_CMD_NEW.\n"); + SCpnt->result = (DID_ERROR << 16); + callDone (SCpnt); + return NULL; + } + + copy_from_user (pScb->buff_ptr, user_area, xfer_size); + pScb->iDataSize = xfer_size; + + switch (data[0]) { + case DCMD_FC_CMD: + switch (data[1]) { + case DCMD_FC_READ_NVRAM_CONFIG: + case DCMD_GET_DISK_CONFIG: + { + if ((ulong) pScb-> + buff_ptr & (PAGE_SIZE - 1)) { + printk + ("megaraid: user address not sufficiently aligned. Error.\n"); + SCpnt->result = + (DID_ERROR << 16); + callDone (SCpnt); + return NULL; + } + + /*building SG list */ + mega_build_kernel_sg (pScb->buff_ptr, + xfer_size, + pScb, mbox); + break; + } + default: + break; + } /*switch (data[1]) */ + break; + } + + } +#endif + + mbox->cmd = data[0]; + mbox->channel = data[1]; + mbox->param = data[2]; + mbox->pad[0] = data[3]; + mbox->logdrv = data[4]; + + if (SCpnt->cmnd[0] == M_RD_IOCTL_CMD_NEW) { + switch (data[0]) { + case FW_FIRE_WRITE: + mbox->cmd = FW_FIRE_WRITE; + mbox->channel = data[1]; /* Current Block Number */ + set_mbox_xfer_addr (megaCfg, pScb, mbox, TO_DEVICE); + mbox->numsgelements = 0; + break; + case FW_FIRE_FLASH: + mbox->cmd = FW_FIRE_FLASH; + mbox->channel = data[1] | 0x80; /* Origin */ + set_mbox_xfer_addr (megaCfg, pScb, mbox, TO_DEVICE); + mbox->numsgelements = 0; + break; + case DCMD_FC_CMD: + *(mboxdata + 0) = data[0]; /*mailbox byte 0: DCMD_FC_CMD */ + *(mboxdata + 2) = data[1]; /*sub command */ + switch (data[1]) { + case DCMD_FC_READ_NVRAM_CONFIG: + case DCMD_FC_READ_NVRAM_CONFIG_64: + /* number of elements in SG list */ + *(mboxdata + 3) = mbox->numsgelements; + if (megaCfg->flag & BOARD_64BIT) + *(mboxdata + 2) = + DCMD_FC_READ_NVRAM_CONFIG_64; + break; + case DCMD_WRITE_CONFIG: + case DCMD_WRITE_CONFIG_64: + if (megaCfg->flag & BOARD_64BIT) +
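+				/*
+				 * (Note: BOARD_64BIT adapters get the 64-bit
+				 * variant of the opcode patched into the
+				 * mailbox, matching the NVRAM and disk-config
+				 * reads handled nearby.)
+				 */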
*(mboxdata + 2) = DCMD_WRITE_CONFIG_64; + set_mbox_xfer_addr (megaCfg, pScb, mbox, + TO_DEVICE); + mbox->numsgelements = 0; + break; + case DCMD_GET_DISK_CONFIG: + case DCMD_GET_DISK_CONFIG_64: + if (megaCfg->flag & BOARD_64BIT) + *(mboxdata + 2) = + DCMD_GET_DISK_CONFIG_64; + *(mboxdata + 3) = data[2]; /*number of elements in SG list */ + /*nr of elements in SG list */ + *(mboxdata + 4) = mbox->numsgelements; + break; + case DCMD_DELETE_LOGDRV: + case DCMD_DELETE_DRIVEGROUP: + case NC_SUBOP_ENQUIRY3: + *(mboxdata + 3) = data[2]; + set_mbox_xfer_addr (megaCfg, pScb, mbox, + FROMTO_DEVICE); + mbox->numsgelements = 0; + break; + case DCMD_CHANGE_LDNO: + case DCMD_CHANGE_LOOPID: + *(mboxdata + 3) = data[2]; + *(mboxdata + 4) = data[3]; + set_mbox_xfer_addr (megaCfg, pScb, mbox, + TO_DEVICE); + mbox->numsgelements = 0; + break; + default: + set_mbox_xfer_addr (megaCfg, pScb, mbox, + FROMTO_DEVICE); + mbox->numsgelements = 0; + break; + } /*switch */ + break; + default: + set_mbox_xfer_addr (megaCfg, pScb, mbox, FROMTO_DEVICE); + mbox->numsgelements = 0; + break; + } + } else { + + mbox->numsgelements = mega_build_sglist (megaCfg, pScb, + (u32 *) & mbox-> + xferaddr, + (u32 *) & seg); + + /* Handling some of the fw special commands */ + switch (data[0]) { + case 6: /* START_DEV */ + mbox->xferaddr = *((u32 *) & data[i + 6]); + break; + default: + break; + } + + for (i = 0; i < (SCpnt->request_bufflen - 6); i++) { + data[i] = data[i + 6]; + } + } + + return (pScb); +} + + +static void mega_build_kernel_sg (char *barea, ulong xfersize, mega_scb * pScb, mega_ioctl_mbox * mbox) +{ + ulong i, buffer_area, len, end, end_page, x, idx = 0; + + buffer_area = (ulong) barea; + i = buffer_area; + end = buffer_area + xfersize; + end_page = (end) & ~(PAGE_SIZE - 1); + + do { + len = PAGE_SIZE - (i % PAGE_SIZE); + x = pScb->sgList[idx].address = + virt_to_bus ((volatile void *) i); + pScb->sgList[idx].length = len; + i += len; + idx++; + } while (i < end_page); + + if ((end - i) < 0) { + printk ("megaraid:Error in user address\n"); + } + + if (end - i) { + pScb->sgList[idx].address = virt_to_bus ((volatile void *) i); + pScb->sgList[idx].length = end - i; + idx++; + } + mbox->xferaddr = virt_to_bus (pScb->sgList); + mbox->numsgelements = idx; +} +#endif + + +#if DEBUG +static unsigned int cum_time = 0; +static unsigned int cum_time_cnt = 0; + +static void showMbox (mega_scb * pScb) +{ + mega_mailbox *mbox; + + if (pScb == NULL) + return; + + mbox = (mega_mailbox *) pScb->mboxData; + printk ("%u cmd:%x id:%x #scts:%x lba:%x addr:%x logdrv:%x #sg:%x\n", + pScb->SCpnt->pid, + mbox->cmd, mbox->cmdid, mbox->numsectors, + mbox->lba, mbox->xferaddr, mbox->logdrv, mbox->numsgelements); +} + +#endif + +/*-------------------------------------------------------------------- + * Interrupt service routine + *--------------------------------------------------------------------*/ +static void megaraid_isr (int irq, void *devp, struct pt_regs *regs) +{ + IO_LOCK_T + mega_host_config * megaCfg; + u_char byte, idx, sIdx, tmpBox[MAILBOX_SIZE]; + u32 dword = 0; + mega_mailbox *mbox; + mega_scb *pScb; + u_char qCnt, qStatus; + u_char completed[MAX_FIRMWARE_STATUS]; + Scsi_Cmnd *SCpnt; + + megaCfg = (mega_host_config *) devp; + mbox = (mega_mailbox *) tmpBox; + + if (megaCfg->host->irq == irq) { + if (megaCfg->flag & IN_ISR) { + TRACE (("ISR called reentrantly!!\n")); + printk ("ISR called reentrantly!!\n"); + } + megaCfg->flag |= IN_ISR; + + if (mega_busyWaitMbox (megaCfg)) { + printk (KERN_WARNING "Error: mailbox busy in 
isr!\n"); + } + + /* Check if a valid interrupt is pending */ + if (megaCfg->flag & BOARD_QUARTZ) { + dword = RDOUTDOOR (megaCfg); + if (dword != 0x10001234) { + /* Spurious interrupt */ + megaCfg->flag &= ~IN_ISR; + return; + } + } else { + byte = READ_PORT (megaCfg->host->io_port, INTR_PORT); + if ((byte & VALID_INTR_BYTE) == 0) { + /* Spurious interrupt */ + megaCfg->flag &= ~IN_ISR; + return; + } + WRITE_PORT (megaCfg->host->io_port, INTR_PORT, byte); + } + + for (idx = 0; idx < MAX_FIRMWARE_STATUS; idx++) + completed[idx] = 0; + + IO_LOCK; + + megaCfg->nInterrupts++; + qCnt = 0xff; + while ((qCnt = megaCfg->mbox->numstatus) == 0xFF) ; + + qStatus = 0xff; + while ((qStatus = megaCfg->mbox->status) == 0xFF) ; + + /* Get list of completed requests */ + for (idx = 0; idx < qCnt; idx++) { + while ((sIdx = megaCfg->mbox->completed[idx]) == 0xFF) { + printk ("p"); + } + completed[idx] = sIdx; + sIdx = 0xFF; + } + + if (megaCfg->flag & BOARD_QUARTZ) { + WROUTDOOR (megaCfg, dword); + /* Acknowledge interrupt */ +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + /* In this case mbox contains physical address */ +#if 0 + WRINDOOR (megaCfg, megaCfg->adjdmahandle64 | 0x2); +#else + WRINDOOR (megaCfg, 0x2); +#endif + +#else + +#if 0 + WRINDOOR (megaCfg, virt_to_bus (megaCfg->mbox) | 0x2); +#else + WRINDOOR (megaCfg, 0x2); +#endif + +#endif + +#if 0 + while (RDINDOOR (megaCfg) & 0x02) ; +#endif + } else { + CLEAR_INTR (megaCfg->host->io_port); + } + +#if DEBUG + if (qCnt >= MAX_FIRMWARE_STATUS) { + printk ("megaraid_isr: cmplt=%d ", qCnt); + } +#endif + + for (idx = 0; idx < qCnt; idx++) { + sIdx = completed[idx]; + if ((sIdx > 0) && (sIdx <= MAX_COMMANDS)) { + pScb = &megaCfg->scbList[sIdx - 1]; + + /* ASSERT(pScb->state == SCB_ISSUED); */ + +#if DEBUG + if (((jiffies) - pScb->isrcount) > maxCmdTime) { + maxCmdTime = (jiffies) - pScb->isrcount; + printk + ("megaraid_isr : cmd time = %u\n", + maxCmdTime); + } +#endif + /* + * Assuming that the scsi command, for which + * an abort request was received earlier, has + * completed. 
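+ * (Note added for clarity, summarizing the code below: an
+ * SCB_ABORTED command simply falls through to the normal
+ * completion path further down, while an SCB_RESET command is
+ * completed right here with DID_RESET and chained onto the
+ * qCompleted list.)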
+ */ + if (pScb->state == SCB_ABORTED) { + SCpnt = pScb->SCpnt; + } + if (pScb->state == SCB_RESET) { + SCpnt = pScb->SCpnt; + mega_freeSCB (megaCfg, pScb); + SCpnt->result = (DID_RESET << 16); + if (megaCfg->qCompletedH == NULL) { + megaCfg->qCompletedH = + megaCfg->qCompletedT = + SCpnt; + } else { + megaCfg->qCompletedT-> + host_scribble = + (unsigned char *) SCpnt; + megaCfg->qCompletedT = SCpnt; + } + megaCfg->qCompletedT->host_scribble = + (unsigned char *) NULL; + megaCfg->qCcnt++; + continue; + } + + /* We don't want the ISR routine to touch M_RD_IOCTL_CMD_NEW commands, so + * don't mark them as complete, instead we pop their semaphore so + * that the queue routine can finish them off + */ + if (pScb->SCpnt->cmnd[0] == M_RD_IOCTL_CMD_NEW) { + /* save the status byte for the queue routine to use */ + pScb->SCpnt->result = qStatus; + up (&pScb->ioctl_sem); + } else { + /* Mark command as completed */ + mega_cmd_done (megaCfg, pScb, qStatus); + } + } else { + printk + ("megaraid: wrong cmd id completed from firmware:id=%x\n", + sIdx); + } + } + + mega_rundoneq (megaCfg); + + megaCfg->flag &= ~IN_ISR; + /* Loop through any pending requests */ + mega_runpendq (megaCfg); + IO_UNLOCK; + + } + +} + +/*==================================================*/ +/* Wait until the controller's mailbox is available */ +/*==================================================*/ + +static int mega_busyWaitMbox (mega_host_config * megaCfg) +{ + mega_mailbox *mbox = (mega_mailbox *) megaCfg->mbox; + long counter; + + for (counter = 0; counter < 10000; counter++) { + if (!mbox->busy) { + return 0; + } + udelay (100); + barrier (); + } + return -1; /* give up after 1 second */ +} + +/*===================================================== + * Post a command to the card + * + * Arguments: + * mega_host_config *megaCfg - Controller structure + * u_char *mboxData - Mailbox area, 16 bytes + * mega_scb *pScb - SCB posting (or NULL if N/A) + * int intr - if 1, interrupt, 0 is blocking + * Return Value: (added on 7/26 for 40ld/64bit) + * -1: the command was not actually issued out + * other cases: + * intr==0, return ScsiStatus, i.e. mbox->status + * intr==1, return 0 + *===================================================== + */ +static int megaIssueCmd (mega_host_config * megaCfg, u_char * mboxData, + mega_scb * pScb, int intr) +{ + volatile mega_mailbox *mbox = (mega_mailbox *) megaCfg->mbox; + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + volatile mega_mailbox64 *mbox64 = (mega_mailbox64 *) megaCfg->mbox64; +#endif + + u_char byte; + +#ifdef __LP64__ + u64 phys_mbox; +#else + u32 phys_mbox; +#endif + u8 retval = -1; + + mboxData[0x1] = (pScb ? 
pScb->idx + 1 : 0xFE); /* Set cmdid */
+ mboxData[0xF] = 1; /* Set busy */
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0)
+ /* In this case mbox contains physical address */
+ phys_mbox = megaCfg->adjdmahandle64;
+#else
+ phys_mbox = virt_to_bus (megaCfg->mbox);
+#endif
+
+#if DEBUG
+ showMbox (pScb);
+#endif
+
+ /* Wait until mailbox is free */
+ if (mega_busyWaitMbox (megaCfg)) {
+ printk ("Blocked mailbox......!!\n");
+ udelay (1000);
+
+#if DEBUG
+ showMbox (pLastScb);
+#endif
+
+ /* Abort command */
+ if (pScb == NULL) {
+ TRACE (("NULL pScb in megaIssue\n"));
+ printk ("NULL pScb in megaIssue\n");
+ }
+ mega_cmd_done (megaCfg, pScb, 0x08);
+ return -1;
+ }
+
+ pLastScb = pScb;
+
+ /* Copy mailbox data into host structure */
+ megaCfg->mbox64->xferSegment_lo = 0;
+ megaCfg->mbox64->xferSegment_hi = 0;
+
+ memcpy ((char *) mbox, mboxData, 16);
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0)
+ switch (mboxData[0]) {
+ case MEGA_MBOXCMD_LREAD64:
+ case MEGA_MBOXCMD_LWRITE64:
+ mbox64->xferSegment_lo = mbox->xferaddr;
+ mbox64->xferSegment_hi = 0;
+ mbox->xferaddr = 0xFFFFFFFF;
+ break;
+ }
+#endif
+
+ /* Kick IO */
+ if (intr) {
+ /* Issue interrupt (non-blocking) command */
+ if (megaCfg->flag & BOARD_QUARTZ) {
+ mbox->mraid_poll = 0;
+ mbox->mraid_ack = 0;
+
+ WRINDOOR (megaCfg, phys_mbox | 0x1);
+ } else {
+ ENABLE_INTR (megaCfg->host->io_port);
+ ISSUE_COMMAND (megaCfg->host->io_port);
+ }
+ pScb->state = SCB_ISSUED;
+
+ retval = 0;
+ } else { /* Issue non-ISR (blocking) command */
+ disable_irq (megaCfg->host->irq);
+ if (megaCfg->flag & BOARD_QUARTZ) {
+ mbox->mraid_poll = 0;
+ mbox->mraid_ack = 0;
+ mbox->numstatus = 0xFF;
+ mbox->status = 0xFF;
+ WRINDOOR (megaCfg, phys_mbox | 0x1);
+
+ while (mbox->numstatus == 0xFF) ;
+ while (mbox->status == 0xFF) ;
+ while (mbox->mraid_poll != 0x77) ;
+ mbox->mraid_poll = 0;
+ mbox->mraid_ack = 0x77;
+
+ /* while ((cmdDone = RDOUTDOOR (megaCfg)) != 0x10001234);
+ WROUTDOOR (megaCfg, cmdDone); */
+
+ if (pScb) {
+ mega_cmd_done (megaCfg, pScb, mbox->status);
+ }
+
+ WRINDOOR (megaCfg, phys_mbox | 0x2);
+ while (RDINDOOR (megaCfg) & 0x2) ;
+
+ } else {
+ DISABLE_INTR (megaCfg->host->io_port);
+ ISSUE_COMMAND (megaCfg->host->io_port);
+
+ while (!
+ ((byte = + READ_PORT (megaCfg->host->io_port, + INTR_PORT)) & INTR_VALID)) ; + WRITE_PORT (megaCfg->host->io_port, INTR_PORT, byte); + + ENABLE_INTR (megaCfg->host->io_port); + CLEAR_INTR (megaCfg->host->io_port); + + if (pScb) { + mega_cmd_done (megaCfg, pScb, mbox->status); + } else { + TRACE (("Error: NULL pScb!\n")); + } + } + enable_irq (megaCfg->host->irq); + retval = mbox->status; + } +#if DEBUG + while (mega_busyWaitMbox (megaCfg)) { + printk(KERN_ERR "Blocked mailbox on exit......!\n"); + udelay (1000); + } +#endif + + return retval; +} + +/*------------------------------------------------------------------- + * Copies data to SGLIST + *-------------------------------------------------------------------*/ +/* Note: + For 64 bit cards, we need a minimum of one SG element for read/write +*/ + +static int +mega_build_sglist (mega_host_config * megaCfg, mega_scb * scb, + u32 * buffer, u32 * length) +{ + struct scatterlist *sgList; + int idx; + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + int sgcnt; +#endif + + mega_mailbox *mbox = NULL; + + mbox = (mega_mailbox *) scb->mboxData; + /* Scatter-gather not used */ + if (scb->SCpnt->use_sg == 0) { + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + scb->dma_h_bulkdata = pci_map_single (megaCfg->dev, + scb->SCpnt->request_buffer, + scb->SCpnt->request_bufflen, + scb->dma_direction); + /* We need to handle special commands like READ64, WRITE64 + as they need a minimum of 1 SG irrespective of actually SG + */ + if ((megaCfg->flag & BOARD_64BIT) && + ((mbox->cmd == MEGA_MBOXCMD_LREAD64) || + (mbox->cmd == MEGA_MBOXCMD_LWRITE64))) { + scb->sg64List[0].address = scb->dma_h_bulkdata; + scb->sg64List[0].length = scb->SCpnt->request_bufflen; + *buffer = scb->dma_sghandle64; + *length = 0; + scb->sglist_count = 1; + return 1; + } else { + *buffer = scb->dma_h_bulkdata; + *length = (u32) scb->SCpnt->request_bufflen; + } +#else + *buffer = virt_to_bus (scb->SCpnt->request_buffer); + *length = (u32) scb->SCpnt->request_bufflen; +#endif + return 0; + } + + sgList = (struct scatterlist *) scb->SCpnt->request_buffer; + + if (scb->SCpnt->use_sg == 1) { + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + scb->dma_h_bulkdata = pci_map_single (megaCfg->dev, + sgList[0].address, + sgList[0].length, scb->dma_direction); + + if ((megaCfg->flag & BOARD_64BIT) && + ((mbox->cmd == MEGA_MBOXCMD_LREAD64) || + (mbox->cmd == MEGA_MBOXCMD_LWRITE64))) { + scb->sg64List[0].address = scb->dma_h_bulkdata; + scb->sg64List[0].length = scb->SCpnt->request_bufflen; + *buffer = scb->dma_sghandle64; + *length = 0; + scb->sglist_count = 1; + return 1; + } else { + *buffer = scb->dma_h_bulkdata; + *length = (u32) sgList[0].length; + } +#else + *buffer = virt_to_bus (sgList[0].address); + *length = (u32) sgList[0].length; +#endif + + return 0; + } + + /* Copy Scatter-Gather list info into controller structure */ +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + sgcnt = pci_map_sg (megaCfg->dev, + sgList, scb->SCpnt->use_sg, scb->dma_direction); + + /* Determine the validity of the new count */ + if (sgcnt == 0) + printk ("pci_map_sg returned zero!!! 
"); + + for (idx = 0; idx < sgcnt; idx++, sgList++) { + + if ((megaCfg->flag & BOARD_64BIT) && + ((mbox->cmd == MEGA_MBOXCMD_LREAD64) || + (mbox->cmd == MEGA_MBOXCMD_LWRITE64))) { + scb->sg64List[idx].address = sg_dma_address (sgList); + scb->sg64List[idx].length = sg_dma_len (sgList); + } else { + scb->sgList[idx].address = sg_dma_address (sgList); + scb->sgList[idx].length = sg_dma_len (sgList); + } + + } + +#else + for (idx = 0; idx < scb->SCpnt->use_sg; idx++) { + scb->sgList[idx].address = virt_to_bus (sgList[idx].address); + scb->sgList[idx].length = (u32) sgList[idx].length; + } +#endif + + /* Reset pointer and length fields */ +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + *buffer = scb->dma_sghandle64; + scb->sglist_count = scb->SCpnt->use_sg; +#else + *buffer = virt_to_bus (scb->sgList); +#endif + *length = 0; + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + /* Return count of SG requests */ + return sgcnt; +#else + /* Return count of SG requests */ + return scb->SCpnt->use_sg; +#endif +} + +/*-------------------------------------------------------------------- + * Initializes the address of the controller's mailbox register + * The mailbox register is used to issue commands to the card. + * Format of the mailbox area: + * 00 01 command + * 01 01 command id + * 02 02 # of sectors + * 04 04 logical bus address + * 08 04 physical buffer address + * 0C 01 logical drive # + * 0D 01 length of scatter/gather list + * 0E 01 reserved + * 0F 01 mailbox busy + * 10 01 numstatus byte + * 11 01 status byte + *--------------------------------------------------------------------*/ +static int +mega_register_mailbox (mega_host_config * megaCfg, u32 paddr) +{ + /* align on 16-byte boundary */ +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + megaCfg->mbox = &megaCfg->mailbox64ptr->mailbox; +#else + megaCfg->mbox = &megaCfg->mailbox64.mailbox; +#endif + +#ifdef __LP64__ + megaCfg->mbox = (mega_mailbox *) ((((u64) megaCfg->mbox) + 16) & ((u64) (-1) ^ 0x0F)); + megaCfg->adjdmahandle64 = (megaCfg->dma_handle64 + 16) & ((u64) (-1) ^ 0x0F); + megaCfg->mbox64 = (mega_mailbox64 *) ((u_char *) megaCfg->mbox - sizeof (u64)); + paddr = (paddr + 4 + 16) & ((u64) (-1) ^ 0x0F); +#else + megaCfg->mbox + = (mega_mailbox *) ((((u32) megaCfg->mbox) + 16) & 0xFFFFFFF0); + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + megaCfg->adjdmahandle64 = ((megaCfg->dma_handle64 + 16) & 0xFFFFFFF0); +#endif + + megaCfg->mbox64 = (mega_mailbox64 *) ((u_char *) megaCfg->mbox - 8); + paddr = (paddr + 4 + 16) & 0xFFFFFFF0; +#endif + + /* Register mailbox area with the firmware */ + if (!(megaCfg->flag & BOARD_QUARTZ)) { + WRITE_PORT (megaCfg->host->io_port, MBOX_PORT0, paddr & 0xFF); + WRITE_PORT (megaCfg->host->io_port, MBOX_PORT1, + (paddr >> 8) & 0xFF); + WRITE_PORT (megaCfg->host->io_port, MBOX_PORT2, + (paddr >> 16) & 0xFF); + WRITE_PORT (megaCfg->host->io_port, MBOX_PORT3, + (paddr >> 24) & 0xFF); + WRITE_PORT (megaCfg->host->io_port, ENABLE_MBOX_REGION, + ENABLE_MBOX_BYTE); + + CLEAR_INTR (megaCfg->host->io_port); + ENABLE_INTR (megaCfg->host->io_port); + } + return 0; +} + +/*--------------------------------------------------------------------------- + * mega_Convert8ldTo40ld() -- takes all info in AdapterInquiry structure and + * puts it into ProductInfo and Enquiry3 structures for later use + *---------------------------------------------------------------------------*/ +static void mega_Convert8ldTo40ld (mega_RAIDINQ * inquiry, + mega_Enquiry3 * enquiry3, + megaRaidProductInfo * productInfo) +{ + int i; + 
+ productInfo->MaxConcCmds = inquiry->AdpInfo.MaxConcCmds; + enquiry3->rbldRate = inquiry->AdpInfo.RbldRate; + productInfo->SCSIChanPresent = inquiry->AdpInfo.ChanPresent; + + for (i = 0; i < 4; i++) { + productInfo->FwVer[i] = inquiry->AdpInfo.FwVer[i]; + productInfo->BiosVer[i] = inquiry->AdpInfo.BiosVer[i]; + } + enquiry3->cacheFlushInterval = inquiry->AdpInfo.CacheFlushInterval; + productInfo->DramSize = inquiry->AdpInfo.DramSize; + + enquiry3->numLDrv = inquiry->LogdrvInfo.NumLDrv; + + for (i = 0; i < MAX_LOGICAL_DRIVES; i++) { + enquiry3->lDrvSize[i] = inquiry->LogdrvInfo.LDrvSize[i]; + enquiry3->lDrvProp[i] = inquiry->LogdrvInfo.LDrvProp[i]; + enquiry3->lDrvState[i] + = inquiry->LogdrvInfo.LDrvState[i]; + } + + for (i = 0; i < (MAX_PHYSICAL_DRIVES); i++) { + enquiry3->pDrvState[i] + = inquiry->PhysdrvInfo.PDrvState[i]; + } +} + +/*------------------------------------------------------------------- + * Issue an adapter info query to the controller + *-------------------------------------------------------------------*/ +static int mega_i_query_adapter (mega_host_config * megaCfg) +{ + mega_Enquiry3 *enquiry3Pnt; + mega_mailbox *mbox; + u_char mboxData[16]; + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + dma_addr_t raid_inq_dma_handle = 0, prod_info_dma_handle = 0, enquiry3_dma_handle = 0; +#endif + u8 retval; + + /* Initialize adapter inquiry mailbox */ + + mbox = (mega_mailbox *) mboxData; + + memset ((void *) megaCfg->mega_buffer, 0, + sizeof (megaCfg->mega_buffer)); + memset (mbox, 0, 16); + +/* + * Try to issue Enquiry3 command + * if not succeeded, then issue MEGA_MBOXCMD_ADAPTERINQ command and + * update enquiry3 structure + */ +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + enquiry3_dma_handle = pci_map_single (megaCfg->dev, + (void *) megaCfg->mega_buffer, + (2 * 1024L), PCI_DMA_FROMDEVICE); + + mbox->xferaddr = enquiry3_dma_handle; +#else + /*Taken care */ + mbox->xferaddr = virt_to_bus ((void *) megaCfg->mega_buffer); +#endif + + /* Initialize mailbox databuffer addr */ + enquiry3Pnt = (mega_Enquiry3 *) megaCfg->mega_buffer; + /* point mega_Enguiry3 to the data buf */ + + mboxData[0] = FC_NEW_CONFIG; /* i.e. mbox->cmd=0xA1 */ + mboxData[2] = NC_SUBOP_ENQUIRY3; /* i.e. 0x0F */ + mboxData[3] = ENQ3_GET_SOLICITED_FULL; /* i.e. 
0x02 */ + + /* Issue a blocking command to the card */ + if ((retval = megaIssueCmd (megaCfg, mboxData, NULL, 0)) != 0) { /* the adapter does not support 40ld */ + mega_RAIDINQ adapterInquiryData; + mega_RAIDINQ *adapterInquiryPnt = &adapterInquiryData; + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + raid_inq_dma_handle = pci_map_single (megaCfg->dev, + (void *) adapterInquiryPnt, + sizeof (mega_RAIDINQ), + PCI_DMA_FROMDEVICE); + mbox->xferaddr = raid_inq_dma_handle; +#else + /*taken care */ + mbox->xferaddr = virt_to_bus ((void *) adapterInquiryPnt); +#endif + + mbox->cmd = MEGA_MBOXCMD_ADAPTERINQ; /*issue old 0x05 command to adapter */ + /* Issue a blocking command to the card */ ; + retval = megaIssueCmd (megaCfg, mboxData, NULL, 0); + + pci_unmap_single (megaCfg->dev, + raid_inq_dma_handle, + sizeof (mega_RAIDINQ), PCI_DMA_FROMDEVICE); + + /*update Enquiry3 and ProductInfo structures with mega_RAIDINQ structure*/ + mega_Convert8ldTo40ld (adapterInquiryPnt, + enquiry3Pnt, + (megaRaidProductInfo *) & megaCfg-> + productInfo); + + } else { /* adapter supports 40ld */ + megaCfg->flag |= BOARD_40LD; + + pci_unmap_single (megaCfg->dev, + enquiry3_dma_handle, + (2 * 1024L), PCI_DMA_FROMDEVICE); +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) +/*get productInfo, which is static information and will be unchanged*/ + prod_info_dma_handle + = pci_map_single (megaCfg->dev, + (void *) &megaCfg->productInfo, + sizeof (megaRaidProductInfo), + PCI_DMA_FROMDEVICE); + mbox->xferaddr = prod_info_dma_handle; +#else + /*taken care */ + mbox->xferaddr = virt_to_bus ((void *) &megaCfg->productInfo); +#endif + + mboxData[0] = FC_NEW_CONFIG; /* i.e. mbox->cmd=0xA1 */ + mboxData[2] = NC_SUBOP_PRODUCT_INFO; /* i.e. 0x0E */ + + if ((retval = megaIssueCmd (megaCfg, mboxData, NULL, 0)) != 0) + printk ("ami:Product_info cmd failed with error: %d\n", + retval); + + pci_unmap_single (megaCfg->dev, + prod_info_dma_handle, + sizeof (megaRaidProductInfo), + PCI_DMA_FROMDEVICE); + } + + megaCfg->host->max_channel = megaCfg->productInfo.SCSIChanPresent; + megaCfg->host->max_id = 16; /* max targets per channel */ + /*(megaCfg->flag & BOARD_40LD)?FC_MAX_TARGETS_PER_CHANNEL:MAX_TARGET+1; */ + megaCfg->host->max_lun = /* max lun */ + (megaCfg-> + flag & BOARD_40LD) ? 
FC_MAX_LOGICAL_DRIVES : MAX_LOGICAL_DRIVES; + megaCfg->host->cmd_per_lun = MAX_CMD_PER_LUN; + + megaCfg->numldrv = enquiry3Pnt->numLDrv; + megaCfg->max_cmds = megaCfg->productInfo.MaxConcCmds; + if (megaCfg->max_cmds > MAX_COMMANDS) + megaCfg->max_cmds = MAX_COMMANDS - 1; + + megaCfg->host->can_queue = megaCfg->max_cmds - 1; + +#if 0 + if (megaCfg->host->can_queue >= MAX_COMMANDS) { + megaCfg->host->can_queue = MAX_COMMANDS - 16; + } +#endif + + /* use HP firmware and bios version encoding */ +if (megaCfg->productInfo.subSystemVendorID == HP_SUBSYS_ID) { + sprintf (megaCfg->fwVer, "%c%d%d.%d%d", + megaCfg->productInfo.FwVer[2], + megaCfg->productInfo.FwVer[1] >> 8, + megaCfg->productInfo.FwVer[1] & 0x0f, + megaCfg->productInfo.FwVer[2] >> 8, + megaCfg->productInfo.FwVer[2] & 0x0f); + sprintf (megaCfg->biosVer, "%c%d%d.%d%d", + megaCfg->productInfo.BiosVer[2], + megaCfg->productInfo.BiosVer[1] >> 8, + megaCfg->productInfo.BiosVer[1] & 0x0f, + megaCfg->productInfo.BiosVer[2] >> 8, + megaCfg->productInfo.BiosVer[2] & 0x0f); +} else { + memcpy (megaCfg->fwVer, (char *) megaCfg->productInfo.FwVer, 4); + megaCfg->fwVer[4] = 0; + + memcpy (megaCfg->biosVer, (char *) megaCfg->productInfo.BiosVer, 4); + megaCfg->biosVer[4] = 0; +} + megaCfg->support_ext_cdb = mega_support_ext_cdb(megaCfg); + + printk (KERN_NOTICE "megaraid: [%s:%s] detected %d logical drives" M_RD_CRLFSTR, + megaCfg->fwVer, megaCfg->biosVer, megaCfg->numldrv); + + if ( megaCfg->support_ext_cdb ) { + printk(KERN_NOTICE "megaraid: supports extended CDBs.\n"); + } + + /* + * I hope that I can unmap here, reason DMA transaction is not required any more + * after this + */ + + return 0; +} + +/*------------------------------------------------------------------------- + * + * Driver interface functions + * + *-------------------------------------------------------------------------*/ + +/*---------------------------------------------------------- + * Returns data to be displayed in /proc/scsi/megaraid/X + *----------------------------------------------------------*/ + +int megaraid_proc_info (char *buffer, char **start, off_t offset, + int length, int host_no, int inout) +{ + *start = buffer; + return 0; +} + +static int mega_findCard (Scsi_Host_Template * pHostTmpl, + u16 pciVendor, u16 pciDev, long flag) +{ + mega_host_config *megaCfg = NULL; + struct Scsi_Host *host = NULL; + u_char pciBus, pciDevFun, megaIrq; + + u16 magic; +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + u32 magic64; +#endif + + int i; + +#ifdef __LP64__ + u64 megaBase; +#else + u32 megaBase; +#endif + + u16 pciIdx = 0; + u16 numFound = 0; + u16 subsysid, subsysvid; + +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,1,0) /* 0x20100 */ + while (!pcibios_find_device + (pciVendor, pciDev, pciIdx, &pciBus, &pciDevFun)) { +#else + +#if LINUX_VERSION_CODE > KERNEL_VERSION(2,3,0) /*0x20300 */ + struct pci_dev *pdev = NULL; +#else + struct pci_dev *pdev = pci_devices; +#endif + + while ((pdev = pci_find_device (pciVendor, pciDev, pdev))) { + if(pci_enable_device (pdev)) + continue; + pciBus = pdev->bus->number; + pciDevFun = pdev->devfn; +#endif + if ((flag & BOARD_QUARTZ) && (skip_id == -1)) { + pcibios_read_config_word (pciBus, pciDevFun, + PCI_CONF_AMISIG, &magic); + if ((magic != AMI_SIGNATURE) + && (magic != AMI_SIGNATURE_471)) { + pciIdx++; + continue; /* not an AMI board */ + } +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + pcibios_read_config_dword (pciBus, pciDevFun, + PCI_CONF_AMISIG64, &magic64); + + if (magic64 == AMI_64BIT_SIGNATURE) + flag |= BOARD_64BIT; 
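+ /*
+ * Added note (not in the original patch): finding
+ * AMI_64BIT_SIGNATURE in PCI config space is what sets
+ * BOARD_64BIT, which in turn selects the 64-bit paths
+ * later on, e.g. the sg64List handling in
+ * mega_build_sglist() and, in megaIssueCmd():
+ *
+ * case MEGA_MBOXCMD_LREAD64:
+ * case MEGA_MBOXCMD_LWRITE64:
+ * mbox64->xferSegment_lo = mbox->xferaddr;
+ * mbox->xferaddr = 0xFFFFFFFF;
+ */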
+#endif + } + + /* Hmmm...Should we not make this more modularized so that in future we dont add + for each firmware */ + + if (flag & BOARD_QUARTZ) { + /* Check to see if this is a Dell PERC RAID controller model 466 */ +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,1,0) /* 0x20100 */ + pcibios_read_config_word (pciBus, pciDevFun, + PCI_SUBSYSTEM_VENDOR_ID, + &subsysvid); + pcibios_read_config_word (pciBus, pciDevFun, + PCI_SUBSYSTEM_ID, &subsysid); +#else + pci_read_config_word (pdev, + PCI_SUBSYSTEM_VENDOR_ID, + &subsysvid); + pci_read_config_word (pdev, + PCI_SUBSYSTEM_ID, &subsysid); +#endif + +#if 0 + /* + * This routine is called with well know values and we + * should not be getting what we have not asked. + * Also, the check is not right. It should have been for + * pci_vendor_id not subsysvid - AM + */ + + /* If we dont detect this valid subsystem vendor id's + we refuse to load the driver + PART of PC200X compliance + */ + + if ((subsysvid != AMI_SUBSYS_ID) + && (subsysvid != DELL_SUBSYS_ID) + && (subsysvid != HP_SUBSYS_ID)) + continue; +#endif + } + + printk (KERN_NOTICE + "megaraid: found 0x%4.04x:0x%4.04x:idx %d:bus %d:slot %d:func %d\n", + pciVendor, pciDev, pciIdx, pciBus, PCI_SLOT (pciDevFun), + PCI_FUNC (pciDevFun)); + /* Read the base port and IRQ from PCI */ +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,1,0) /* 0x20100 */ + pcibios_read_config_dword (pciBus, pciDevFun, + PCI_BASE_ADDRESS_0, + (u_int *) & megaBase); + pcibios_read_config_byte (pciBus, pciDevFun, + PCI_INTERRUPT_LINE, &megaIrq); +#elif LINUX_VERSION_CODE < KERNEL_VERSION(2,3,0) /*0x20300 */ + megaBase = pdev->base_address[0]; + megaIrq = pdev->irq; +#else + + megaBase = pci_resource_start (pdev, 0); + megaIrq = pdev->irq; +#endif + + pciIdx++; + + if (flag & BOARD_QUARTZ) { + megaBase &= PCI_BASE_ADDRESS_MEM_MASK; + megaBase = (long) ioremap (megaBase, 128); + if (!megaBase) + continue; + } else { + megaBase &= PCI_BASE_ADDRESS_IO_MASK; + megaBase += 0x10; + } + + /* Initialize SCSI Host structure */ + host = scsi_register (pHostTmpl, sizeof (mega_host_config)); + if (!host) + goto err_unmap; + + /* + * Comment the following initialization if you know 'max_sectors' is + * not defined for this kernel. 
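+ * (Editorial note, not in the original comment: 1024 sectors of
+ * 512 bytes caps each request at 512KB.)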
+ * This field was introduced in Linus's kernel 2.4.7pre3 and it + * greatly increases the IO performance - AM + */ + host->max_sectors = 1024; + + scsi_set_pci_device(host, pdev); + megaCfg = (mega_host_config *) host->hostdata; + memset (megaCfg, 0, sizeof (mega_host_config)); + + printk (KERN_NOTICE "scsi%d : Found a MegaRAID controller at 0x%x, IRQ: %d" + M_RD_CRLFSTR, host->host_no, (u_int) megaBase, megaIrq); + + if (flag & BOARD_64BIT) + printk (KERN_NOTICE "scsi%d : Enabling 64 bit support\n", + host->host_no); + + /* Copy resource info into structure */ + megaCfg->qCompletedH = NULL; + megaCfg->qCompletedT = NULL; + megaCfg->qPendingH = NULL; + megaCfg->qPendingT = NULL; + megaCfg->qFreeH = NULL; + megaCfg->qFreeT = NULL; + megaCfg->qFcnt = 0; + megaCfg->qPcnt = 0; + megaCfg->qCcnt = 0; + megaCfg->lock_free = SPIN_LOCK_UNLOCKED; + megaCfg->lock_pend = SPIN_LOCK_UNLOCKED; + megaCfg->lock_scsicmd = SPIN_LOCK_UNLOCKED; + megaCfg->flag = flag; + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + megaCfg->dev = pdev; +#endif + megaCfg->host = host; + megaCfg->base = megaBase; + megaCfg->host->irq = megaIrq; + megaCfg->host->io_port = megaBase; + megaCfg->host->n_io_port = 16; + megaCfg->host->unique_id = (pciBus << 8) | pciDevFun; + megaCtlrs[numCtlrs] = megaCfg; + + if (!(flag & BOARD_QUARTZ)) { + /* Request our IO Range */ + if (check_region (megaBase, 16)) { + printk(KERN_WARNING "megaraid: Couldn't register I/O range!\n"); + goto err_unregister; + } + request_region(megaBase, 16, "megaraid"); + } + + /* Request our IRQ */ + if (request_irq (megaIrq, megaraid_isr, SA_SHIRQ, + "megaraid", megaCfg)) { + printk (KERN_WARNING + "megaraid: Couldn't register IRQ %d!\n", + megaIrq); + goto err_release; + } + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + /* + * unmap while releasing the driver, Is it required to be + * PCI_DMA_BIDIRECTIONAL + */ + + megaCfg->mailbox64ptr + = pci_alloc_consistent (megaCfg->dev, + sizeof (mega_mailbox64), + &(megaCfg->dma_handle64)); + + mega_register_mailbox (megaCfg, + virt_to_bus ((void *) megaCfg-> + mailbox64ptr)); +#else + mega_register_mailbox (megaCfg, + virt_to_bus ((void *) &megaCfg-> + mailbox64)); +#endif + + mega_i_query_adapter (megaCfg); + + if ((subsysid == 0x1111) && (subsysvid == 0x1111)) { + + /* + * Which firmware + */ + if( strcmp(megaCfg->fwVer, "3.00") == 0 || + strcmp(megaCfg->fwVer, "3.01") == 0 ) { + + printk( KERN_WARNING + "megaraid: Your card is a Dell PERC 2/SC RAID controller " + "with firmware\nmegaraid: 3.00 or 3.01. This driver is " + "known to have corruption issues\nmegaraid: with those " + "firmware versions on this specific card. 
In order\n" + "megaraid: to protect your data, please upgrade your " + "firmware to version\nmegaraid: 3.10 or later, available " + "from the Dell Technical Support web\nmegaraid: site at\n" + "http://support.dell.com/us/en/filelib/download/" + "index.asp?fileid=2940\n" + ); + } + } + + /* + * If we have a HP 1M(0x60E7)/2M(0x60E8) controller with + * firmware H.01.07 or H.01.08, disable 64 bit support, + * since this firmware cannot handle 64 bit addressing + */ + + if( (subsysvid == HP_SUBSYS_ID) && + ((subsysid == 0x60E7)||(subsysid == 0x60E8)) ) { + + /* + * which firmware + */ + if( strcmp(megaCfg->fwVer, "H01.07") == 0 || + strcmp(megaCfg->fwVer, "H01.08") == 0 ) { + printk(KERN_WARNING + "megaraid: Firmware H.01.07 or H.01.08 on 1M/2M " + "controllers\nmegaraid: do not support 64 bit " + "addressing.\n" + "megaraid: DISABLING 64 bit support.\n"); + megaCfg->flag &= ~BOARD_64BIT; + } + } + + if (mega_is_bios_enabled (megaCfg)) { + mega_hbas[numCtlrs].is_bios_enabled = 1; + } + + /* + * Find out which channel is raid and which is scsi + */ + mega_enum_raid_scsi(megaCfg); + for( i = 0; i < megaCfg->host->max_channel; i++ ) { + if(IS_RAID_CH(i)) + printk(KERN_NOTICE"megaraid: channel[%d] is raid.\n", i+1); + else + printk(KERN_NOTICE"megaraid: channel[%d] is scsi.\n", i+1); + } + + /* + * Find out if a logical drive is set as the boot drive. If there is + * one, will make that as the first logical drive. + */ + mega_get_boot_ldrv(megaCfg); + + mega_hbas[numCtlrs].hostdata_addr = megaCfg; + + /* Initialize SCBs */ + if (mega_init_scb (megaCfg)) { + pci_free_consistent (megaCfg->dev, + sizeof (mega_mailbox64), + (void *) megaCfg->mailbox64ptr, + megaCfg->dma_handle64); + scsi_unregister (host); + continue; + } + + /* + * Fill in the structure which needs to be passed back to the + * application when it does an ioctl() for controller related + * information. + */ + + i = numCtlrs; + numCtlrs++; + + mcontroller[i].base = megaBase; + mcontroller[i].irq = megaIrq; + mcontroller[i].numldrv = megaCfg->numldrv; + mcontroller[i].pcibus = pciBus; + mcontroller[i].pcidev = pciDev; + mcontroller[i].pcifun = PCI_FUNC (pciDevFun); + mcontroller[i].pciid = pciIdx; + mcontroller[i].pcivendor = pciVendor; + mcontroller[i].pcislot = PCI_SLOT (pciDevFun); + mcontroller[i].uid = (pciBus << 8) | pciDevFun; + + numFound++; + + /* Set the Mode of addressing to 64 bit */ +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + if ((megaCfg->flag & BOARD_64BIT) && BITS_PER_LONG == 64) +#ifdef __LP64__ + pdev->dma_mask = 0xffffffffffffffff; +#else + pdev->dma_mask = 0xffffffff; +#endif +#endif + continue; + err_release: + if (flag & BOARD_QUARTZ) + release_region (megaBase, 16); + err_unregister: + scsi_unregister (host); + err_unmap: + if (flag & BOARD_QUARTZ) + iounmap ((void *) megaBase); + } + return numFound; +} + +/*--------------------------------------------------------- + * Detects if a megaraid controller exists in this system + *---------------------------------------------------------*/ + +int megaraid_detect (Scsi_Host_Template * pHostTmpl) +{ + int ctlridx = 0, count = 0; + +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,3,0) /*0x20300 */ + pHostTmpl->proc_dir = &proc_scsi_megaraid; +#else + pHostTmpl->proc_name = "megaraid"; +#endif + +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,1,0) /* 0x20100 */ + if (!pcibios_present ()) { + printk (KERN_WARNING "megaraid: PCI bios not present." 
+ M_RD_CRLFSTR); + return 0; + } +#endif + skip_id = -1; + if (megaraid && !strncmp (megaraid, "skip", strlen ("skip"))) { + if (megaraid[4] != '\0') { + skip_id = megaraid[4] - '0'; + if (megaraid[5] != '\0') { + skip_id = (skip_id * 10) + (megaraid[5] - '0'); + } + } + skip_id = (skip_id > 15) ? -1 : skip_id; + } + + printk (KERN_NOTICE "megaraid: " MEGARAID_VERSION M_RD_CRLFSTR); + + memset (mega_hbas, 0, sizeof (mega_hbas)); + + count += mega_findCard (pHostTmpl, PCI_VENDOR_ID_AMI, + PCI_DEVICE_ID_AMI_MEGARAID, 0); + count += mega_findCard (pHostTmpl, PCI_VENDOR_ID_AMI, + PCI_DEVICE_ID_AMI_MEGARAID2, 0); + count += mega_findCard (pHostTmpl, 0x8086, + PCI_DEVICE_ID_AMI_MEGARAID3, BOARD_QUARTZ); + count += mega_findCard (pHostTmpl, PCI_VENDOR_ID_AMI, + PCI_DEVICE_ID_AMI_MEGARAID3, BOARD_QUARTZ); + + mega_reorder_hosts (); + +#ifdef CONFIG_PROC_FS + if (count) { +#if LINUX_VERSION_CODE > KERNEL_VERSION(2,3,0) /*0x20300 */ + mega_proc_dir_entry = proc_mkdir ("megaraid", &proc_root); +#else + mega_proc_dir_entry = create_proc_entry ("megaraid", + S_IFDIR | S_IRUGO | + S_IXUGO, &proc_root); +#endif + if (!mega_proc_dir_entry) + printk ("megaraid: failed to create megaraid root\n"); + else + for (ctlridx = 0; ctlridx < count; ctlridx++) + mega_create_proc_entry (ctlridx, + mega_proc_dir_entry); + } +#endif + + /* + * Register the driver as a character device, for applications to access + * it for ioctls. + * Ideally, this should go in the init_module() routine, but since it is + * hidden in the file "scsi_module.c" ( included in the end ), we define + * it here + * First argument (major) to register_chrdev implies a dynamic major + * number allocation. + */ + major = register_chrdev (0, "megadev", &megadev_fops); + + /* + * Register the Shutdown Notification hook in kernel + */ + if (register_reboot_notifier (&mega_notifier)) { + printk ("MegaRAID Shutdown routine not registered!!\n"); + } + + init_MUTEX (&mimd_entry_mtx); + + return count; +} + +/*--------------------------------------------------------------------- + * Release the controller's resources + *---------------------------------------------------------------------*/ +int megaraid_release (struct Scsi_Host *pSHost) +{ + mega_host_config *megaCfg; + mega_mailbox *mbox; + u_char mboxData[16]; + int i; + + megaCfg = (mega_host_config *) pSHost->hostdata; + mbox = (mega_mailbox *) mboxData; + + /* Flush cache to disk */ + memset (mbox, 0, 16); + mboxData[0] = 0xA; + + free_irq (megaCfg->host->irq, megaCfg); /* Must be freed first, otherwise + extra interrupt is generated */ + + /* Issue a blocking (interrupts disabled) command to the card */ + megaIssueCmd (megaCfg, mboxData, NULL, 0); + + /* Free our resources */ + if (megaCfg->flag & BOARD_QUARTZ) { + iounmap ((void *) megaCfg->base); + } else { + release_region (megaCfg->host->io_port, 16); + } + + mega_freeSgList (megaCfg); + pci_free_consistent (megaCfg->dev, + sizeof (mega_mailbox64), + (void *) megaCfg->mailbox64ptr, + megaCfg->dma_handle64); + +#ifdef CONFIG_PROC_FS + if (megaCfg->controller_proc_dir_entry) { + remove_proc_entry ("stat", megaCfg->controller_proc_dir_entry); + remove_proc_entry ("status", + megaCfg->controller_proc_dir_entry); + remove_proc_entry ("config", + megaCfg->controller_proc_dir_entry); + remove_proc_entry ("mailbox", + megaCfg->controller_proc_dir_entry); + for (i = 0; i < numCtlrs; i++) { + char buf[12] = { 0 }; + sprintf (buf, "%d", i); + remove_proc_entry (buf, mega_proc_dir_entry); + } + remove_proc_entry ("megaraid", &proc_root); + } +#endif + + 
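+ /*
+ * Recap (comment added for reference; it restates the sequence
+ * earlier in this routine rather than adding behaviour): the
+ * cache flush issued above is a plain 16-byte mailbox command,
+ *
+ * memset (mbox, 0, 16);
+ * mboxData[0] = 0xA; flush adapter cache to disk
+ * megaIssueCmd (megaCfg, mboxData, NULL, 0); blocking
+ *
+ * and free_irq() is called before it since, as noted above, an
+ * extra interrupt is generated otherwise; the reboot notifier
+ * performs the same sequence at shutdown.
+ */
+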
+ /*
+ * Release the controller memory. A word of warning: this frees
+ * hostdata, and that includes megaCfg itself, so be careful what
+ * you dereference beyond this point.
+ */
+
+ scsi_unregister (pSHost);
+
+ /*
+ * Unregister the character device interface to the driver. Ideally this
+ * should have been done in cleanup_module routine. Since this is hidden
+ * in file "scsi_module.c", we do it here.
+ * major is the major number of the character device returned by call to
+ * register_chrdev() routine.
+ */
+
+ unregister_chrdev (major, "megadev");
+ unregister_reboot_notifier (&mega_notifier);
+
+ return 0;
+}
+
+static int mega_is_bios_enabled (mega_host_config * megacfg)
+{
+ mega_mailbox *mboxpnt;
+ unsigned char mbox[16];
+ int ret;
+
+ mboxpnt = (mega_mailbox *) mbox;
+
+ memset (mbox, 0, sizeof (mbox));
+ memset ((void *) megacfg->mega_buffer,
+ 0, sizeof (megacfg->mega_buffer));
+
+ /*
+ * issue command to find out if the BIOS is enabled for this controller
+ */
+ mbox[0] = IS_BIOS_ENABLED;
+ mbox[2] = GET_BIOS;
+
+ mboxpnt->xferaddr = virt_to_bus ((void *) megacfg->mega_buffer);
+
+ ret = megaIssueCmd (megacfg, mbox, NULL, 0);
+
+ return (*(char *) megacfg->mega_buffer);
+}
+
+/*
+ * Find out what channels are RAID/SCSI
+ */
+void
+mega_enum_raid_scsi(mega_host_config *megacfg)
+{
+ mega_mailbox *mboxp;
+ unsigned char mbox[16];
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0)
+ dma_addr_t dma_handle;
+#endif
+
+ mboxp = (mega_mailbox *)mbox;
+
+ memset(mbox, 0, sizeof(mbox));
+ /*
+ * issue command to find out what channels are raid/scsi
+ */
+ mbox[0] = CHNL_CLASS;
+ mbox[2] = GET_CHNL_CLASS;
+
+ memset((void *)megacfg->mega_buffer, 0, sizeof(megacfg->mega_buffer));
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0)
+ dma_handle = pci_map_single(megacfg->dev, (void *)megacfg->mega_buffer,
+ (2 * 1024L), PCI_DMA_FROMDEVICE);
+
+ mboxp->xferaddr = dma_handle;
+#else
+ mboxp->xferaddr = virt_to_bus((void *)megacfg->mega_buffer);
+#endif
+
+ /*
+ * Non-ROMB firmware fails this command, so all channels
+ * must be shown as RAID
+ */
+ if( megaIssueCmd(megacfg, mbox, NULL, 0) == 0 ) {
+ mega_ch_class = *((char *)megacfg->mega_buffer);
+
+ /* logical drives channel is RAID */
+ mega_ch_class |= (0x01 << megacfg->host->max_channel);
+ }
+ else {
+ mega_ch_class = 0xFF;
+ }
+
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0)
+ pci_unmap_single(megacfg->dev, dma_handle,
+ (2 * 1024L), PCI_DMA_FROMDEVICE);
+#endif
+
+}
+
+
+/*
+ * get the boot logical drive number if enabled
+ */
+void
+mega_get_boot_ldrv(mega_host_config *megacfg)
+{
+ mega_mailbox *mboxp;
+ unsigned char mbox[16];
+ struct private_bios_data *prv_bios_data;
+ u16 cksum = 0;
+ char *cksum_p;
+ int i;
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0)
+ dma_addr_t dma_handle;
+#endif
+
+ mboxp = (mega_mailbox *)mbox;
+
+ memset(mbox, 0, sizeof(mbox));
+
+ mbox[0] = BIOS_PVT_DATA;
+ mbox[2] = GET_BIOS_PVT_DATA;
+
+ memset((void *)megacfg->mega_buffer, 0, sizeof(megacfg->mega_buffer));
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0)
+ dma_handle = pci_map_single(megacfg->dev, (void *)megacfg->mega_buffer,
+ (2 * 1024L), PCI_DMA_FROMDEVICE);
+
+ mboxp->xferaddr = dma_handle;
+#else
+ mboxp->xferaddr = virt_to_bus((void *)megacfg->mega_buffer);
+#endif
+
+ megacfg->boot_ldrv_enabled = 0;
+ megacfg->boot_ldrv = 0;
+ if( megaIssueCmd(megacfg, mbox, NULL, 0) == 0 ) {
+
+ prv_bios_data = (struct private_bios_data *)megacfg->mega_buffer;
+
+ cksum = 0;
+ cksum_p = (char *)prv_bios_data;
+ for( i = 0; i < 14; i++ ) {
+ cksum +=
(u16)(*cksum_p++);
+ }
+
+ if( prv_bios_data->cksum == (u16)(0-cksum) ) {
+ megacfg->boot_ldrv_enabled = 1;
+ megacfg->boot_ldrv = prv_bios_data->boot_ldrv;
+ }
+ }
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0)
+ pci_unmap_single(megacfg->dev, dma_handle,
+ (2 * 1024L), PCI_DMA_FROMDEVICE);
+#endif
+
+}
+
+
+static void mega_reorder_hosts (void)
+{
+ struct Scsi_Host *shpnt;
+ struct Scsi_Host *shone;
+ struct Scsi_Host *shtwo;
+ mega_host_config *boot_host;
+ int i;
+
+ /*
+ * Find the (first) host which has its BIOS enabled
+ */
+ boot_host = NULL;
+ for (i = 0; i < MAX_CONTROLLERS; i++) {
+ if (mega_hbas[i].is_bios_enabled) {
+ boot_host = mega_hbas[i].hostdata_addr;
+ break;
+ }
+ }
+
+ if (boot_host == NULL) {
+ printk (KERN_WARNING "megaraid: no BIOS enabled.\n");
+ return;
+ }
+
+ /*
+ * Traverse through the list of SCSI hosts for our HBA locations
+ */
+ shone = shtwo = NULL;
+ for (shpnt = scsi_hostlist; shpnt; shpnt = shpnt->next) {
+ /* Is it one of ours? */
+ for (i = 0; i < MAX_CONTROLLERS; i++) {
+ if ((mega_host_config *) shpnt->hostdata ==
+ mega_hbas[i].hostdata_addr) {
+ /* Does this one have its BIOS enabled? */
+ if (mega_hbas[i].hostdata_addr == boot_host) {
+
+ /* Are we first? */
+ if (shtwo == NULL) /* Yes! */
+ return;
+ else { /* :-( */
+ shone = shpnt;
+ }
+ } else {
+ if (!shtwo) {
+ /* were we here before? exchange first */
+ shtwo = shpnt;
+ }
+ }
+ break;
+ }
+ }
+ /*
+ * Have we got the boot host and one which does not have the BIOS
+ * enabled?
+ */
+ if (shone && shtwo)
+ break;
+ }
+ if (shone && shtwo) {
+ mega_swap_hosts (shone, shtwo);
+ }
+
+ return;
+}
+
+static void mega_swap_hosts (struct Scsi_Host *shone, struct Scsi_Host *shtwo)
+{
+ struct Scsi_Host *prevtoshtwo;
+ struct Scsi_Host *prevtoshone;
+ struct Scsi_Host *save = NULL;
+
+ /* Are these two nodes adjacent? */
+ if (shtwo->next == shone) {
+
+ if (shtwo == scsi_hostlist && shone->next == NULL) {
+
+ /* just two nodes */
+ scsi_hostlist = shone;
+ shone->next = shtwo;
+ shtwo->next = NULL;
+ } else if (shtwo == scsi_hostlist) {
+ /* first two nodes of the list */
+
+ scsi_hostlist = shone;
+ shtwo->next = shone->next;
+ scsi_hostlist->next = shtwo;
+ } else if (shone->next == NULL) {
+ /* last two nodes of the list */
+
+ prevtoshtwo = scsi_hostlist;
+
+ while (prevtoshtwo->next != shtwo)
+ prevtoshtwo = prevtoshtwo->next;
+
+ prevtoshtwo->next = shone;
+ shone->next = shtwo;
+ shtwo->next = NULL;
+ } else {
+ prevtoshtwo = scsi_hostlist;
+
+ while (prevtoshtwo->next != shtwo)
+ prevtoshtwo = prevtoshtwo->next;
+
+ prevtoshtwo->next = shone;
+ shtwo->next = shone->next;
+ shone->next = shtwo;
+ }
+
+ } else if (shtwo == scsi_hostlist && shone->next == NULL) {
+ /* shtwo at head, shone at tail, not adjacent */
+
+ prevtoshone = scsi_hostlist;
+
+ while (prevtoshone->next != shone)
+ prevtoshone = prevtoshone->next;
+
+ scsi_hostlist = shone;
+ shone->next = shtwo->next;
+ prevtoshone->next = shtwo;
+ shtwo->next = NULL;
+ } else if (shtwo == scsi_hostlist && shone->next != NULL) {
+ /* shtwo at head, shone is not at tail */
+
+ prevtoshone = scsi_hostlist;
+ while (prevtoshone->next != shone)
+ prevtoshone = prevtoshone->next;
+
+ scsi_hostlist = shone;
+ prevtoshone->next = shtwo;
+ save = shtwo->next;
+ shtwo->next = shone->next;
+ shone->next = save;
+ } else if (shone->next == NULL) {
+ /* shtwo not at head, shone at tail */
+
+ prevtoshtwo = scsi_hostlist;
+ prevtoshone = scsi_hostlist;
+
+ while (prevtoshtwo->next != shtwo)
+ prevtoshtwo = prevtoshtwo->next;
+ while (prevtoshone->next !=
shone) + prevtoshone = prevtoshone->next; + + prevtoshtwo->next = shone; + shone->next = shtwo->next; + prevtoshone->next = shtwo; + shtwo->next = NULL; + + } else { + prevtoshtwo = scsi_hostlist; + prevtoshone = scsi_hostlist; + save = NULL;; + + while (prevtoshtwo->next != shtwo) + prevtoshtwo = prevtoshtwo->next; + while (prevtoshone->next != shone) + prevtoshone = prevtoshone->next; + + prevtoshtwo->next = shone; + save = shone->next; + shone->next = shtwo->next; + prevtoshone->next = shtwo; + shtwo->next = save; + } + return; +} + +static inline void mega_freeSgList (mega_host_config * megaCfg) +{ + int i; + + for (i = 0; i < megaCfg->max_cmds; i++) { + if (megaCfg->scbList[i].sgList) + pci_free_consistent (megaCfg->dev, + sizeof (mega_64sglist) * + MAX_SGLIST, + megaCfg->scbList[i].sgList, + megaCfg->scbList[i]. + dma_sghandle64); +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,4,0) /* 0x020400 */ + kfree (megaCfg->scbList[i].sgList); /* free sgList */ +#endif + } +} + +/*---------------------------------------------- + * Get information about the card/driver + *----------------------------------------------*/ +const char *megaraid_info (struct Scsi_Host *pSHost) +{ + static char buffer[512]; + mega_host_config *megaCfg; + + megaCfg = (mega_host_config *) pSHost->hostdata; + + sprintf (buffer, + "AMI MegaRAID %s %d commands %d targs %d chans %d luns", + megaCfg->fwVer, megaCfg->productInfo.MaxConcCmds, + megaCfg->host->max_id, megaCfg->host->max_channel, + megaCfg->host->max_lun); + return buffer; +} + +/*----------------------------------------------------------------- + * Perform a SCSI command + * Mailbox area: + * 00 01 command + * 01 01 command id + * 02 02 # of sectors + * 04 04 logical bus address + * 08 04 physical buffer address + * 0C 01 logical drive # + * 0D 01 length of scatter/gather list + * 0E 01 reserved + * 0F 01 mailbox busy + * 10 01 numstatus byte + * 11 01 status byte + *-----------------------------------------------------------------*/ +int megaraid_queue (Scsi_Cmnd * SCpnt, void (*pktComp) (Scsi_Cmnd *)) +{ + DRIVER_LOCK_T mega_host_config * megaCfg; + mega_scb *pScb; + char *user_area = NULL; + + megaCfg = (mega_host_config *) SCpnt->host->hostdata; + DRIVER_LOCK (megaCfg); + + if (!(megaCfg->flag & (1L << SCpnt->channel))) { + if (SCpnt->channel < SCpnt->host->max_channel) + printk ( KERN_NOTICE + "scsi%d: scanning channel %c for devices.\n", + megaCfg->host->host_no, SCpnt->channel + '1'); + else + printk ( KERN_NOTICE + "scsi%d: scanning virtual channel for logical drives.\n", + megaCfg->host->host_no); + + megaCfg->flag |= (1L << SCpnt->channel); + } + + SCpnt->scsi_done = pktComp; + + if (mega_driver_ioctl (megaCfg, SCpnt)) + return 0; + + /* If driver in abort or reset.. 
cancel this command */ + if (megaCfg->flag & IN_ABORT) { + SCpnt->result = (DID_ABORT << 16); + /* Add Scsi_Command to end of completed queue */ + if (megaCfg->qCompletedH == NULL) { + megaCfg->qCompletedH = megaCfg->qCompletedT = SCpnt; + } else { + megaCfg->qCompletedT->host_scribble = + (unsigned char *) SCpnt; + megaCfg->qCompletedT = SCpnt; + } + megaCfg->qCompletedT->host_scribble = (unsigned char *) NULL; + megaCfg->qCcnt++; + + DRIVER_UNLOCK (megaCfg); + return 0; + } else if (megaCfg->flag & IN_RESET) { + SCpnt->result = (DID_RESET << 16); + /* Add Scsi_Command to end of completed queue */ + if (megaCfg->qCompletedH == NULL) { + megaCfg->qCompletedH = megaCfg->qCompletedT = SCpnt; + } else { + megaCfg->qCompletedT->host_scribble = + (unsigned char *) SCpnt; + megaCfg->qCompletedT = SCpnt; + } + megaCfg->qCompletedT->host_scribble = (unsigned char *) NULL; + megaCfg->qCcnt++; + + DRIVER_UNLOCK (megaCfg); + return 0; + } + + megaCfg->flag |= IN_QUEUE; + /* Allocate and build a SCB request */ + if ((pScb = mega_build_cmd (megaCfg, SCpnt)) != NULL) { + /*build SCpnt for M_RD_IOCTL_CMD_NEW cmd in mega_ioctl() */ + /* Add SCB to the head of the pending queue */ + /* Add SCB to the head of the pending queue */ + if (megaCfg->qPendingH == NULL) { + megaCfg->qPendingH = megaCfg->qPendingT = pScb; + } else { + megaCfg->qPendingT->next = pScb; + megaCfg->qPendingT = pScb; + } + megaCfg->qPendingT->next = NULL; + megaCfg->qPcnt++; + + if (mega_runpendq (megaCfg) == -1) { + DRIVER_UNLOCK (megaCfg); + return 0; + } + + if (pScb->SCpnt->cmnd[0] == M_RD_IOCTL_CMD_NEW) { + init_MUTEX_LOCKED (&pScb->ioctl_sem); + spin_unlock_irq (&io_request_lock); + down (&pScb->ioctl_sem); + user_area = (char *)*((u32*)&pScb->SCpnt->cmnd[4]); + if (copy_to_user + (user_area, pScb->buff_ptr, pScb->iDataSize)) { + printk + ("megaraid: Error copying ioctl return value to user buffer.\n"); + pScb->SCpnt->result = (DID_ERROR << 16); + } + spin_lock_irq (&io_request_lock); + DRIVER_LOCK (megaCfg); + kfree (pScb->buff_ptr); + pScb->buff_ptr = NULL; + mega_cmd_done (megaCfg, pScb, pScb->SCpnt->result); + mega_rundoneq (megaCfg); + mega_runpendq (megaCfg); + DRIVER_UNLOCK (megaCfg); + } + + megaCfg->flag &= ~IN_QUEUE; + + } + + DRIVER_UNLOCK (megaCfg); + return 0; +} + +/*---------------------------------------------------------------------- + * Issue a blocking command to the controller + *----------------------------------------------------------------------*/ +volatile static int internal_done_flag = 0; +volatile static int internal_done_errcode = 0; + +static DECLARE_WAIT_QUEUE_HEAD (internal_wait); + +static void internal_done (Scsi_Cmnd * SCpnt) +{ + internal_done_errcode = SCpnt->result; + internal_done_flag++; + wake_up (&internal_wait); +} + +/* shouldn't be used, but included for completeness */ + +int megaraid_command (Scsi_Cmnd * SCpnt) +{ + internal_done_flag = 0; + + /* Queue command, and wait until it has completed */ + megaraid_queue (SCpnt, internal_done); + + while (!internal_done_flag) { + interruptible_sleep_on (&internal_wait); + } + + return internal_done_errcode; +} + +/*--------------------------------------------------------------------- + * Abort a previous SCSI request + *---------------------------------------------------------------------*/ +int megaraid_abort (Scsi_Cmnd * SCpnt) +{ + mega_host_config *megaCfg; + int rc; /*, idx; */ + mega_scb *pScb; + + rc = SCSI_ABORT_NOT_RUNNING; + + megaCfg = (mega_host_config *) SCpnt->host->hostdata; + + megaCfg->flag |= IN_ABORT; + + for (pScb = 
megaCfg->qPendingH; pScb; pScb = pScb->next) {
+ if (pScb->SCpnt == SCpnt) {
+ /* Found an aborting command */
+#if DEBUG
+ showMbox (pScb);
+#endif
+
+ /*
+ * If the command is queued to be issued to the firmware, abort the scsi cmd.
+ * If the command was already aborted in a previous call to the _abort entry
+ * point, return SCSI_ABORT_SNOOZE, suggesting a reset.
+ * If the command has been issued to the firmware, and so might complete after
+ * some time, we mark the scb as aborted and tell the mid layer that the
+ * abort could not be done.
+ * In the ISR, when such a command actually completes, we perform a normal
+ * completion.
+ *
+ * Oct 27, 1999
+ */
+
+ switch (pScb->state) {
+ case SCB_ABORTED: /* Already aborted */
+ rc = SCSI_ABORT_SNOOZE;
+ break;
+ case SCB_ISSUED: /* Waiting on ISR result */
+ rc = SCSI_ABORT_NOT_RUNNING;
+ pScb->state = SCB_ABORTED;
+ break;
+ case SCB_ACTIVE: /* still on the pending queue */
+ mega_freeSCB (megaCfg, pScb);
+ SCpnt->result = (DID_ABORT << 16);
+ if (megaCfg->qCompletedH == NULL) {
+ megaCfg->qCompletedH =
+ megaCfg->qCompletedT = SCpnt;
+ } else {
+ megaCfg->qCompletedT->host_scribble =
+ (unsigned char *) SCpnt;
+ megaCfg->qCompletedT = SCpnt;
+ }
+ megaCfg->qCompletedT->host_scribble =
+ (unsigned char *) NULL;
+ megaCfg->qCcnt++;
+ rc = SCSI_ABORT_SUCCESS;
+ break;
+ default:
+ printk
+ ("megaraid_abort: unknown command state!!\n");
+ rc = SCSI_ABORT_NOT_RUNNING;
+ break;
+ }
+ break;
+ }
+ }
+
+ megaCfg->flag &= ~IN_ABORT;
+
+#if DEBUG
+ if (megaCfg->flag & IN_QUEUE)
+ printk ("ma:flag is in queue\n");
+ if (megaCfg->qCompletedH == NULL)
+ printk ("ma:qchead == null\n");
+#endif
+
+ /*
+ * This is required here so that any completed requests are communicated
+ * over to the mid layer.
+ * Calling just mega_rundoneq() did not work.
+ */
+ if (megaCfg->qCompletedH) {
+ SCpnt = megaCfg->qCompletedH;
+ megaCfg->qCompletedH = (Scsi_Cmnd *) SCpnt->host_scribble;
+ megaCfg->qCcnt--;
+
+ SCpnt->host_scribble = (unsigned char *) NULL;
+ /* Callback */
+ callDone (SCpnt);
+ }
+ mega_rundoneq (megaCfg);
+
+ return rc;
+}
+
+/*---------------------------------------------------------------------
+ * Reset a previous SCSI request
+ *---------------------------------------------------------------------*/
+
+int megaraid_reset (Scsi_Cmnd * SCpnt, unsigned int rstflags)
+{
+ mega_host_config *megaCfg;
+ int idx;
+ int rc;
+ mega_scb *pScb;
+
+ rc = SCSI_RESET_NOT_RUNNING;
+ megaCfg = (mega_host_config *) SCpnt->host->hostdata;
+
+ megaCfg->flag |= IN_RESET;
+
+ printk
+ ("megaraid_RESET: %.08lx cmd=%.02x <%d.%d.%d>, flag = %x\n",
+ SCpnt->serial_number, SCpnt->cmnd[0], SCpnt->channel,
+ SCpnt->target, SCpnt->lun, rstflags);
+
+ TRACE (("RESET: %.08lx %.02x <%d.%d.%d>\n",
+ SCpnt->serial_number, SCpnt->cmnd[0], SCpnt->channel,
+ SCpnt->target, SCpnt->lun));
+
+ /*
+ * Walk list of SCBs for any that are still outstanding
+ */
+ for (idx = 0; idx < megaCfg->max_cmds; idx++) {
+ if (megaCfg->scbList[idx].state != SCB_FREE) {
+ SCpnt = megaCfg->scbList[idx].SCpnt;
+ pScb = &megaCfg->scbList[idx];
+ if (SCpnt != NULL) {
+ pScb->state = SCB_RESET;
+ break;
+ }
+ }
+ }
+
+ megaCfg->flag &= ~IN_RESET;
+
+ mega_rundoneq (megaCfg);
+ return rc;
+}
+
+#ifdef CONFIG_PROC_FS
+/* Following code handles /proc fs */
+static int proc_printf (mega_host_config * megaCfg, const char *fmt, ...)
+{ + va_list args; + int i; + + if (megaCfg->procidx > PROCBUFSIZE) + return 0; + + va_start (args, fmt); + i = vsprintf ((megaCfg->procbuf + megaCfg->procidx), fmt, args); + va_end (args); + + megaCfg->procidx += i; + return i; +} + +static int proc_read_config (char *page, char **start, off_t offset, + int count, int *eof, void *data) +{ + + mega_host_config *megaCfg = (mega_host_config *) data; + + *start = page; + + if (megaCfg->productInfo.ProductName[0] != 0) + proc_printf (megaCfg, "%s\n", megaCfg->productInfo.ProductName); + + proc_printf (megaCfg, "Controller Type: "); + + if (megaCfg->flag & BOARD_QUARTZ) + proc_printf (megaCfg, "438/466/467/471/493\n"); + else + proc_printf (megaCfg, "418/428/434\n"); + + if (megaCfg->flag & BOARD_40LD) + proc_printf (megaCfg, + "Controller Supports 40 Logical Drives\n"); + + if (megaCfg->flag & BOARD_64BIT) + proc_printf (megaCfg, + "Controller / Driver uses 64 bit memory addressing\n"); + + proc_printf (megaCfg, "Base = %08x, Irq = %d, ", megaCfg->base, + megaCfg->host->irq); + + proc_printf (megaCfg, "Logical Drives = %d, Channels = %d\n", + megaCfg->numldrv, megaCfg->productInfo.SCSIChanPresent); + + proc_printf (megaCfg, "Version =%s:%s, DRAM = %dMb\n", + megaCfg->fwVer, megaCfg->biosVer, + megaCfg->productInfo.DramSize); + + proc_printf (megaCfg, + "Controller Queue Depth = %d, Driver Queue Depth = %d\n", + megaCfg->productInfo.MaxConcCmds, megaCfg->max_cmds); + COPY_BACK; + return count; +} + +static int proc_read_stat (char *page, char **start, off_t offset, + int count, int *eof, void *data) +{ + int i; + mega_host_config *megaCfg = (mega_host_config *) data; + + *start = page; + + proc_printf (megaCfg, "Statistical Information for this controller\n"); + proc_printf (megaCfg, "Interrupts Collected = %lu\n", + megaCfg->nInterrupts); + + for (i = 0; i < megaCfg->numldrv; i++) { + proc_printf (megaCfg, "Logical Drive %d:\n", i); + + proc_printf (megaCfg, + "\tReads Issued = %lu, Writes Issued = %lu\n", + megaCfg->nReads[i], megaCfg->nWrites[i]); + + proc_printf (megaCfg, + "\tSectors Read = %lu, Sectors Written = %lu\n\n", + megaCfg->nReadBlocks[i], megaCfg->nWriteBlocks[i]); + + } + + COPY_BACK; + return count; +} + +static int proc_read_status (char *page, char **start, off_t offset, + int count, int *eof, void *data) +{ + mega_host_config *megaCfg = (mega_host_config *) data; + *start = page; + + proc_printf (megaCfg, "TBD\n"); + COPY_BACK; + return count; +} + +static int proc_read_mbox (char *page, char **start, off_t offset, + int count, int *eof, void *data) +{ + + mega_host_config *megaCfg = (mega_host_config *) data; + volatile mega_mailbox *mbox = megaCfg->mbox; + + *start = page; + + proc_printf (megaCfg, "Contents of Mail Box Structure\n"); + proc_printf (megaCfg, " Fw Command = 0x%02x\n", mbox->cmd); + proc_printf (megaCfg, " Cmd Sequence = 0x%02x\n", mbox->cmdid); + proc_printf (megaCfg, " No of Sectors= %04d\n", mbox->numsectors); + proc_printf (megaCfg, " LBA = 0x%02x\n", mbox->lba); + proc_printf (megaCfg, " DTA = 0x%08x\n", mbox->xferaddr); + proc_printf (megaCfg, " Logical Drive= 0x%02x\n", mbox->logdrv); + proc_printf (megaCfg, " No of SG Elmt= 0x%02x\n", mbox->numsgelements); + proc_printf (megaCfg, " Busy = %01x\n", mbox->busy); + proc_printf (megaCfg, " Status = 0x%02x\n", mbox->status); + + /* proc_printf(megaCfg, "Dump of MailBox\n"); + for (i = 0; i < 16; i++) + proc_printf(megaCfg, "%02x ",*(mbox + i)); + + proc_printf(megaCfg, "\n\nNumber of Status = %02d\n",mbox->numstatus); + + for (i = 0; i < 46; i++) { + 
proc_printf(megaCfg,"%02d ",*(mbox + 16 + i)); + if (i%16) + proc_printf(megaCfg,"\n"); + } + + if (!mbox->numsgelements) { + dta = phys_to_virt(mbox->xferaddr); + for (i = 0; i < mbox->numsgelements; i++) + if (dta) { + proc_printf(megaCfg,"Addr = %08x\n", (ulong)*(dta + i)); proc_printf(megaCfg,"Length = %08x\n", + (ulong)*(dta + i + 4)); + } + }*/ + COPY_BACK; + return count; +} + +#if LINUX_VERSION_CODE > KERNEL_VERSION(2,3,0) /*0x20300 */ +#define CREATE_READ_PROC(string, fxn) create_proc_read_entry(string, \ + S_IRUSR | S_IFREG,\ + controller_proc_dir_entry,\ + fxn, megaCfg) +#else +#define CREATE_READ_PROC(string, fxn) create_proc_read_entry(string,S_IRUSR | S_IFREG, controller_proc_dir_entry, fxn, megaCfg) + +static struct proc_dir_entry * +create_proc_read_entry (const char *string, + int mode, + struct proc_dir_entry *parent, + read_proc_t * fxn, mega_host_config * megaCfg) +{ + struct proc_dir_entry *temp = NULL; + + temp = kmalloc (sizeof (struct proc_dir_entry), GFP_KERNEL); + if (!temp) + return NULL; + memset (temp, 0, sizeof (struct proc_dir_entry)); + + if ((temp->name = kmalloc (strlen (string) + 1, GFP_KERNEL)) == NULL) { + kfree (temp); + return NULL; + } + + strcpy ((char *) temp->name, string); + temp->namelen = strlen (string); + temp->mode = mode; /*S_IFREG | S_IRUSR */ ; + temp->data = (void *) megaCfg; + temp->read_proc = fxn; + proc_register (parent, temp); + return temp; +} +#endif + +static void mega_create_proc_entry (int index, struct proc_dir_entry *parent) +{ + u_char string[64] = { 0 }; + mega_host_config *megaCfg = megaCtlrs[index]; + struct proc_dir_entry *controller_proc_dir_entry = NULL; + + sprintf (string, "%d", index); + +#if LINUX_VERSION_CODE > KERNEL_VERSION(2,3,0) /*0x20300 */ + controller_proc_dir_entry = + megaCfg->controller_proc_dir_entry = proc_mkdir (string, parent); +#else + controller_proc_dir_entry = + megaCfg->controller_proc_dir_entry = + create_proc_entry (string, S_IFDIR | S_IRUGO | S_IXUGO, parent); +#endif + + if (!controller_proc_dir_entry) + printk ("\nmegaraid: proc_mkdir failed\n"); + else { + megaCfg->proc_read = + CREATE_READ_PROC ("config", proc_read_config); + megaCfg->proc_status = + CREATE_READ_PROC ("status", proc_read_status); + megaCfg->proc_stat = CREATE_READ_PROC ("stat", proc_read_stat); + megaCfg->proc_mbox = + CREATE_READ_PROC ("mailbox", proc_read_mbox); + } + +} +#endif /* CONFIG_PROC_FS */ + +/*------------------------------------------------------------- + * Return the disk geometry for a particular disk + * Input: + * Disk *disk - Disk geometry + * kdev_t dev - Device node + * int *geom - Returns geometry fields + * geom[0] = heads + * geom[1] = sectors + * geom[2] = cylinders + *-------------------------------------------------------------*/ +int megaraid_biosparam (Disk * disk, kdev_t dev, int *geom) +{ + int heads, sectors, cylinders; + mega_host_config *megaCfg; + + /* Get pointer to host config structure */ + megaCfg = (mega_host_config *) disk->device->host->hostdata; + + if( IS_RAID_CH(disk->device->channel)) { + /* Default heads (64) & sectors (32) */ + heads = 64; + sectors = 32; + cylinders = disk->capacity / (heads * sectors); + + /* Handle extended translation size for logical drives > 1Gb */ + if (disk->capacity >= 0x200000) { + heads = 255; + sectors = 63; + cylinders = disk->capacity / (heads * sectors); + } + + /* return result */ + geom[0] = heads; + geom[1] = sectors; + geom[2] = cylinders; + } + else { + if( mega_partsize(disk, dev, geom) == 0 ) return 0; + + printk(KERN_WARNING + 
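
The geometry fallback in megaraid_biosparam() is plain arithmetic: capacity is counted in 512-byte sectors, the default mapping is 64 heads by 32 sectors, and drives of 1 GB or more (0x200000 sectors) switch to the 255 by 63 extended translation. A compilable model with a worked example:

    #include <stdio.h>

    static void default_chs(unsigned long capacity, int geom[3])
    {
        int heads = 64, sectors = 32;

        if (capacity >= 0x200000) {     /* >= 1 GB: extended translation */
            heads = 255;
            sectors = 63;
        }
        geom[0] = heads;
        geom[1] = sectors;
        geom[2] = capacity / (heads * sectors);
    }

    int main(void)
    {
        int g[3];

        default_chs(0x200000, g);       /* exactly 1 GB */
        printf("%d heads, %d sectors, %d cylinders\n", g[0], g[1], g[2]);
        /* prints: 255 heads, 63 sectors, 130 cylinders */
        return 0;
    }
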
"megaraid: invalid partition on this disk on channel %d\n", + disk->device->channel); + + /* Default heads (64) & sectors (32) */ + heads = 64; + sectors = 32; + cylinders = disk->capacity / (heads * sectors); + + /* Handle extended translation size for logical drives > 1Gb */ + if (disk->capacity >= 0x200000) { + heads = 255; + sectors = 63; + cylinders = disk->capacity / (heads * sectors); + } + + /* return result */ + geom[0] = heads; + geom[1] = sectors; + geom[2] = cylinders; + } + + return 0; +} + +/* + * Function : static int mega_partsize(Disk * disk, kdev_t dev, int *geom) + * + * Purpose : to determine the BIOS mapping used to create the partition + * table, storing the results (cyls, hds, and secs) in geom + * + * Note: Code is picked from scsicam.h + * + * Returns : -1 on failure, 0 on success. + */ +static int +mega_partsize(Disk * disk, kdev_t dev, int *geom) +{ + struct buffer_head *bh; + struct partition *p, *largest = NULL; + int i, largest_cyl; + int heads, cyls, sectors; + int capacity = disk->capacity; + + int ma = MAJOR(dev); + int mi = (MINOR(dev) & ~0xf); + + int block = 1024; + + if(blksize_size[ma]) + block = blksize_size[ma][mi]; + + if(!(bh = bread(MKDEV(ma,mi), 0, block))) + return -1; + + if( *(unsigned short *)(bh->b_data + 510) == 0xAA55 ) { + for( largest_cyl = -1, p = (struct partition *)(0x1BE + bh->b_data), + i = 0; i < 4; ++i, ++p) { + + if (!p->sys_ind) continue; + + cyls = p->end_cyl + ((p->end_sector & 0xc0) << 2); + + if(cyls >= largest_cyl) { + largest_cyl = cyls; + largest = p; + } + } + } + if (largest) { + heads = largest->end_head + 1; + sectors = largest->end_sector & 0x3f; + + if (heads == 0 || sectors == 0) { + brelse(bh); + return -1; + } + + cyls = capacity/(heads * sectors); + + geom[0] = heads; + geom[1] = sectors; + geom[2] = cyls; + + brelse(bh); + return 0; + } + + brelse(bh); + return -1; +} + + +/* + * This routine will be called when the use has done a forced shutdown on the + * system. Flush the Adapter cache, that's the most we can do. + */ +static int megaraid_reboot_notify (struct notifier_block *this, unsigned long code, + void *unused) +{ + struct Scsi_Host *pSHost; + mega_host_config *megaCfg; + mega_mailbox *mbox; + u_char mboxData[16]; + int i; + + if (code == SYS_DOWN || code == SYS_HALT) { + for (i = 0; i < numCtlrs; i++) { + pSHost = megaCtlrs[i]->host; + + megaCfg = (mega_host_config *) pSHost->hostdata; + mbox = (mega_mailbox *) mboxData; + + /* Flush cache to disk */ + memset (mbox, 0, 16); + mboxData[0] = 0xA; + + /* + * Free irq, otherwise extra interrupt is generated + */ + free_irq (megaCfg->host->irq, megaCfg); + + /* + * Issue a blocking (interrupts disabled) command to + * the card + */ + megaIssueCmd (megaCfg, mboxData, NULL, 0); + } + } + return NOTIFY_DONE; +} + +static int mega_init_scb (mega_host_config * megacfg) +{ + int idx; + +#if DEBUG + if (megacfg->max_cmds >= MAX_COMMANDS) { + printk ("megaraid:ctlr max cmds = %x : MAX_CMDS = %x", + megacfg->max_cmds, MAX_COMMANDS); + } +#endif + + for (idx = megacfg->max_cmds - 1; idx >= 0; idx--) { + + megacfg->scbList[idx].idx = idx; + + /* + * ISR will make this flag zero to indicate the command has been + * completed. This is only for user ioctl calls. Rest of the driver + * and the mid-layer operations are not connected with this flag. + */ + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + megacfg->scbList[idx].sgList = + pci_alloc_consistent (megacfg->dev, + sizeof (mega_64sglist) * MAX_SGLIST, + &(megacfg->scbList[idx]. 
+ dma_sghandle64)); + + megacfg->scbList[idx].sg64List = + (mega_64sglist *) megacfg->scbList[idx].sgList; +#else + megacfg->scbList[idx].sgList = kmalloc (sizeof (mega_sglist) * MAX_SGLIST, GFP_ATOMIC | GFP_DMA); +#endif + + if (megacfg->scbList[idx].sgList == NULL) { + printk (KERN_WARNING + "Can't allocate sglist for id %d\n", idx); + mega_freeSgList (megacfg); + return -1; + } +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + megacfg->scbList[idx].pthru = pci_alloc_consistent (megacfg->dev, + sizeof (mega_passthru), + &(megacfg->scbList[idx]. + dma_passthruhandle64)); + + if (megacfg->scbList[idx].pthru == NULL) { + printk (KERN_WARNING + "Can't allocate passthru for id %d\n", idx); + } + + megacfg->scbList[idx].epthru = + pci_alloc_consistent( + megacfg->dev, sizeof(mega_ext_passthru), + &(megacfg->scbList[idx].dma_ext_passthruhandle64) + ); + + if (megacfg->scbList[idx].epthru == NULL) { + printk (KERN_WARNING + "Can't allocate extended passthru for id %d\n", idx); + } + /* + * Allocate a 256 Byte Bounce Buffer for handling INQ/RD_CAPA + */ + megacfg->scbList[idx].bounce_buffer = pci_alloc_consistent (megacfg->dev, + 256, + &(megacfg->scbList[idx]. + dma_bounce_buffer)); + + if (!megacfg->scbList[idx].bounce_buffer) + printk + ("megaraid: allocation for bounce buffer failed\n"); + + megacfg->scbList[idx].dma_type = M_RD_DMA_TYPE_NONE; +#endif + + if (idx < MAX_COMMANDS) { + /* + * Link to free list + * lock not required since we are loading the driver, so no + * commands possible right now. + */ + enq_scb_freelist (megacfg, &megacfg->scbList[idx], + NO_LOCK, INTR_ENB); + + } + } + + return 0; +} + +/* + * Enqueues a SCB + */ +static void enq_scb_freelist (mega_host_config * megacfg, mega_scb * scb, int lock, + int intr) +{ + + if (lock == INTERNAL_LOCK || intr == INTR_DIS) { + if (intr == INTR_DIS) + spin_lock_irq (&megacfg->lock_free); + else + spin_lock (&megacfg->lock_free); + } + + scb->state = SCB_FREE; + scb->SCpnt = NULL; + + if (megacfg->qFreeH == (mega_scb *) NULL) { + megacfg->qFreeH = megacfg->qFreeT = scb; + } else { + megacfg->qFreeT->next = scb; + megacfg->qFreeT = scb; + } + + megacfg->qFreeT->next = NULL; + megacfg->qFcnt++; + + if (lock == INTERNAL_LOCK || intr == INTR_DIS) { + if (intr == INTR_DIS) + spin_unlock_irq (&megacfg->lock_free); + else + spin_unlock (&megacfg->lock_free); + } +} + +/* + * Routines for the character/ioctl interface to the driver + */ +static int megadev_open (struct inode *inode, struct file *filep) +{ + MOD_INC_USE_COUNT; + return 0; /* success */ +} + +static int megadev_ioctl_entry (struct inode *inode, struct file *filep, + unsigned int cmd, unsigned long arg) +{ + int ret = -1; + + /* + * We do not allow parallel ioctls to the driver as of now. 
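
The SCB free list above is a singly linked list threaded through ->next with both head and tail pointers, so enqueue and dequeue stay O(1); the driver wraps the operations in spin_lock() or spin_lock_irq() depending on the caller's context. A userspace model of the list discipline alone (locking omitted):

    #include <stddef.h>

    struct scb {
        struct scb *next;
    };

    struct freelist {
        struct scb *head, *tail;
        int count;
    };

    static void fl_push(struct freelist *fl, struct scb *scb)
    {
        scb->next = NULL;
        if (!fl->head)
            fl->head = fl->tail = scb;  /* first element: both ends point at it */
        else {
            fl->tail->next = scb;
            fl->tail = scb;
        }
        fl->count++;
    }

    static struct scb *fl_pop(struct freelist *fl)
    {
        struct scb *scb = fl->head;

        if (scb) {
            fl->head = scb->next;
            if (!fl->head)
                fl->tail = NULL;        /* list drained */
            fl->count--;
        }
        return scb;
    }
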
+ */ + down (&mimd_entry_mtx); + ret = megadev_ioctl (inode, filep, cmd, arg); + up (&mimd_entry_mtx); + + return ret; + +} + +static int megadev_ioctl (struct inode *inode, struct file *filep, + unsigned int cmd, unsigned long arg) +{ + int adapno; + kdev_t dev; + u32 inlen; + struct uioctl_t ioc; + char *kvaddr = NULL; + int nadap = numCtlrs; + u8 opcode; + u32 outlen; + int ret; + u8 subopcode; + Scsi_Cmnd *scsicmd; + struct Scsi_Host *shpnt; + char *uaddr; + struct uioctl_t *uioc; + dma_addr_t dma_addr; + u32 length; + mega_host_config *megacfg = NULL; +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) /* 0x020400 */ + struct pci_dev pdev; + struct pci_dev *pdevp = &pdev; +#else + char *pdevp = NULL; +#endif + IO_LOCK_T; + + if (!inode || !(dev = inode->i_rdev)) + return -EINVAL; + + if (_IOC_TYPE (cmd) != MEGAIOC_MAGIC) + return (-EINVAL); + + /* + * Get the user ioctl structure + */ + ret = verify_area (VERIFY_WRITE, (char *) arg, sizeof (struct uioctl_t)); + + if (ret) + return ret; + + if(copy_from_user (&ioc, (char *) arg, sizeof (struct uioctl_t))) + return -EFAULT; + + /* + * The first call the applications should make is to find out the + * number of controllers in the system. The next logical call should + * be for getting the list of controllers in the system as detected + * by the driver. + */ + + /* + * Get the opcode and subopcode for the commands + */ + opcode = ioc.ui.fcs.opcode; + subopcode = ioc.ui.fcs.subopcode; + + switch (opcode) { + case M_RD_DRIVER_IOCTL_INTERFACE: + switch (subopcode) { + case MEGAIOC_QDRVRVER: /* Query driver version */ + put_user (driver_ver, (u32 *) ioc.data); + return 0; + + case MEGAIOC_QNADAP: /* Get # of adapters */ + put_user (nadap, (int *) ioc.data); + return nadap; + + case MEGAIOC_QADAPINFO: /* Get adapter information */ + /* + * which adapter? + */ + adapno = ioc.ui.fcs.adapno; + + /* + * The adapter numbers do not start with 0, at least in + * the user space. This is just to make sure, 0 is not the + * default value which will refer to adapter 1. So the + * user needs to make use of macros MKADAP() and GETADAP() + * (See megaraid.h) while making ioctl() call. + */ + adapno = GETADAP (adapno); + + if (adapno >= numCtlrs) + return (-ENODEV); + + ret = verify_area (VERIFY_WRITE, + ioc.data, + sizeof (struct mcontroller)); + if (ret) + return ret; + + /* + * Copy struct mcontroller to user area + */ + copy_to_user (ioc.data, + mcontroller + adapno, + sizeof (struct mcontroller)); + return 0; + + default: + return (-EINVAL); + + } /* inner switch */ + break; + + case M_RD_IOCTL_CMD_NEW: + /* which adapter? */ + adapno = ioc.ui.fcs.adapno; + + /* See comment above: MEGAIOC_QADAPINFO */ + adapno = GETADAP(adapno); + + if (adapno >= numCtlrs) + return(-ENODEV); + + length = ioc.ui.fcs.length; + + /* Check for zero length buffer or very large buffers */ + if( !length || length > 32*1024 ) + return -EINVAL; + + /* save the user address */ + uaddr = ioc.ui.fcs.buffer; + + /* + * For M_RD_IOCTL_CMD_NEW commands, the fields outlen and inlen of + * uioctl_t structure are treated as flags. If outlen is 1, the + * data is transferred from the device and if inlen is 1, the data + * is transferred to the device. 
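
The comment above is the reason raw adapter numbers never cross the ioctl boundary: user space tags them with MKADAP() so that a zero-filled field can never silently mean adapter 0, and the driver strips the tag with GETADAP(). The real macros are defined in megaraid.h and are not reproduced here; the following is a hypothetical encoding in the same spirit:

    #include <assert.h>

    #define ADAP_MAGIC  0x6d00          /* made-up tag, not the driver's */
    #define MKADAP(n)   (ADAP_MAGIC | (n))
    #define GETADAP(v)  ((v) & 0xff)

    int main(void)
    {
        int handle = MKADAP(0);         /* adapter 0 still yields a nonzero handle */

        assert(handle != 0);
        assert(GETADAP(handle) == 0);
        return 0;
    }
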
+ */ + outlen = ioc.outlen; + inlen = ioc.inlen; + + if(outlen) { + ret = verify_area(VERIFY_WRITE, (char *)ioc.ui.fcs.buffer, length); + if (ret) return ret; + } + if(inlen) { + ret = verify_area(VERIFY_READ, (char *) ioc.ui.fcs.buffer, length); + if (ret) return ret; + } + + /* + * Find this host + */ + for( shpnt = scsi_hostlist; shpnt; shpnt = shpnt->next ) { + if( shpnt->hostdata == (unsigned long *)megaCtlrs[adapno] ) { + megacfg = (mega_host_config *)shpnt->hostdata; + break; + } + } + if(shpnt == NULL) return -ENODEV; + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + scsicmd = (Scsi_Cmnd *)kmalloc(sizeof(Scsi_Cmnd), GFP_KERNEL|GFP_DMA); +#else + scsicmd = (Scsi_Cmnd *)scsi_init_malloc(sizeof(Scsi_Cmnd), + GFP_ATOMIC | GFP_DMA); +#endif + if(scsicmd == NULL) return -ENOMEM; + + memset(scsicmd, 0, sizeof(Scsi_Cmnd)); + scsicmd->host = shpnt; + + if( outlen || inlen ) { +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + pdevp = &pdev; + memcpy(pdevp, megacfg->dev, sizeof(struct pci_dev)); + pdevp->dma_mask = 0xffffffff; +#else + pdevp = NULL; +#endif + kvaddr = dma_alloc_consistent(pdevp, length, &dma_addr); + + if( kvaddr == NULL ) { + printk(KERN_WARNING "megaraid:allocation failed\n"); +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) /*0x20400 */ + kfree(scsicmd); +#else + scsi_init_free((char *)scsicmd, sizeof(Scsi_Cmnd)); +#endif + return -ENOMEM; + } + + ioc.ui.fcs.buffer = kvaddr; + + if (inlen) { + /* copyin the user data */ + copy_from_user(kvaddr, (char *)uaddr, length ); + } + } + + scsicmd->cmnd[0] = MEGADEVIOC; + scsicmd->request_buffer = (void *)&ioc; + + init_MUTEX_LOCKED(&mimd_ioctl_sem); + + IO_LOCK; + megaraid_queue(scsicmd, megadev_ioctl_done); + + IO_UNLOCK; + + down(&mimd_ioctl_sem); + + if( !scsicmd->result && outlen ) { + copy_to_user(uaddr, kvaddr, length); + } + + /* + * copyout the result + */ + uioc = (struct uioctl_t *)arg; + + if( ioc.mbox[0] == MEGA_MBOXCMD_PASSTHRU ) { + put_user( scsicmd->result, &uioc->pthru.scsistatus ); + } else { + put_user(1, &uioc->mbox[16]); /* numstatus */ + /* status */ + put_user (scsicmd->result, &uioc->mbox[17]); + } + + if (kvaddr) { + dma_free_consistent(pdevp, length, kvaddr, dma_addr); + } +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) /*0x20400 */ + kfree (scsicmd); +#else + scsi_init_free((char *)scsicmd, sizeof(Scsi_Cmnd)); +#endif + + /* restore the user address */ + ioc.ui.fcs.buffer = uaddr; + + return ret; + + case M_RD_IOCTL_CMD: + /* which adapter? 
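
The data path above is a classic bounce-buffer scheme: allocate a DMA-safe kernel buffer, copy in from user space when inlen is set, run the firmware command, copy out when outlen is set, then free the buffer. A userspace model of the flow, with fake_copy() standing in for copy_from_user()/copy_to_user() and calloc()/free() for dma_alloc_consistent()/dma_free_consistent():

    #include <stdlib.h>
    #include <string.h>

    static int fake_copy(void *dst, const void *src, size_t len)
    {
        memcpy(dst, src, len);          /* the kernel versions can fail */
        return 0;
    }

    static int do_xfer(void *ubuf, size_t len, int in, int out,
                       int (*fw_cmd)(void *kbuf, size_t len))
    {
        void *kbuf = calloc(1, len);    /* dma_alloc_consistent() in the driver */
        int ret;

        if (!kbuf)
            return -1;
        if (in)
            fake_copy(kbuf, ubuf, len); /* data travels to the device */
        ret = fw_cmd(kbuf, len);
        if (!ret && out)
            fake_copy(ubuf, kbuf, len); /* data travels from the device */
        free(kbuf);                     /* dma_free_consistent() */
        return ret;
    }
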
*/ + adapno = ioc.ui.fcs.adapno; + + /* See comment above: MEGAIOC_QADAPINFO */ + adapno = GETADAP (adapno); + + if (adapno >= numCtlrs) + return (-ENODEV); + + /* save the user address */ + uaddr = ioc.data; + outlen = ioc.outlen; + inlen = ioc.inlen; + + if ((outlen >= IOCTL_MAX_DATALEN) || (inlen >= IOCTL_MAX_DATALEN)) + return (-EINVAL); + + if (outlen) { + ret = verify_area (VERIFY_WRITE, ioc.data, outlen); + if (ret) return ret; + } + if (inlen) { + ret = verify_area (VERIFY_READ, ioc.data, inlen); + if (ret) return ret; + } + + /* + * Find this host + */ + for( shpnt = scsi_hostlist; shpnt; shpnt = shpnt->next ) { + if( shpnt->hostdata == (unsigned long *)megaCtlrs[adapno] ) { + megacfg = (mega_host_config *)shpnt->hostdata; + break; + } + } + if(shpnt == NULL) return -ENODEV; + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + scsicmd = (Scsi_Cmnd *)kmalloc(sizeof(Scsi_Cmnd), GFP_KERNEL|GFP_DMA); +#else + scsicmd = (Scsi_Cmnd *)scsi_init_malloc(sizeof(Scsi_Cmnd), + GFP_ATOMIC | GFP_DMA); +#endif + if(scsicmd == NULL) return -ENOMEM; + + memset(scsicmd, 0, sizeof(Scsi_Cmnd)); + scsicmd->host = shpnt; + + if (outlen || inlen) { +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + pdevp = &pdev; + memcpy(pdevp, megacfg->dev, sizeof(struct pci_dev)); + pdevp->dma_mask = 0xffffffff; +#else + pdevp = NULL; +#endif + /* + * Allocate a page of kernel space. + */ + kvaddr = dma_alloc_consistent(pdevp, PAGE_SIZE, &dma_addr); + + if( kvaddr == NULL ) { + printk (KERN_WARNING "megaraid:allocation failed\n"); +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) /*0x20400 */ + kfree(scsicmd); +#else + scsi_init_free((char *)scsicmd, sizeof(Scsi_Cmnd)); +#endif + return -ENOMEM; + } + + ioc.data = kvaddr; + + if (inlen) { + if (ioc.mbox[0] == MEGA_MBOXCMD_PASSTHRU) { + /* copyin the user data */ + copy_from_user (kvaddr, uaddr, ioc.pthru.dataxferlen); + } else { + copy_from_user (kvaddr, uaddr, inlen); + } + } + } + + scsicmd->cmnd[0] = MEGADEVIOC; + scsicmd->request_buffer = (void *) &ioc; + + init_MUTEX_LOCKED (&mimd_ioctl_sem); + + IO_LOCK; + megaraid_queue (scsicmd, megadev_ioctl_done); + + IO_UNLOCK; + down (&mimd_ioctl_sem); + + if (!scsicmd->result && outlen) { + if (ioc.mbox[0] == MEGA_MBOXCMD_PASSTHRU) { + copy_to_user (uaddr, kvaddr, ioc.pthru.dataxferlen); + } else { + copy_to_user (uaddr, kvaddr, outlen); + } + } + + /* + * copyout the result + */ + uioc = (struct uioctl_t *) arg; + + if (ioc.mbox[0] == MEGA_MBOXCMD_PASSTHRU) { + put_user (scsicmd->result, &uioc->pthru.scsistatus); + } else { + put_user (1, &uioc->mbox[16]); /* numstatus */ + /* status */ + put_user (scsicmd->result, &uioc->mbox[17]); + } + + if (kvaddr) { + dma_free_consistent(pdevp, PAGE_SIZE, kvaddr, dma_addr ); + } + +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) + kfree (scsicmd); +#else + scsi_init_free((char *)scsicmd, sizeof(Scsi_Cmnd)); +#endif + + /* restore user pointer */ + ioc.data = uaddr; + + return ret; + + default: + return (-EINVAL); + + }/* Outer switch */ + + return 0; +} + +static void +megadev_ioctl_done(Scsi_Cmnd *sc) +{ + up (&mimd_ioctl_sem); +} + +static mega_scb * +megadev_doioctl (mega_host_config * megacfg, Scsi_Cmnd * sc) +{ + u8 cmd; + struct uioctl_t *ioc = NULL; + mega_mailbox *mbox = NULL; + mega_ioctl_mbox *mboxioc = NULL; + struct mbox_passthru *mboxpthru = NULL; + mega_scb *scb = NULL; + mega_passthru *pthru = NULL; + + if ((scb = mega_allocateSCB (megacfg, sc)) == NULL) { + sc->result = (DID_ERROR << 16); + callDone (sc); + return NULL; + } + + ioc = (struct uioctl_t *) 
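
Both ioctl paths block the caller the same way: mimd_ioctl_sem is created already locked with init_MUTEX_LOCKED(), megaraid_queue() is handed megadev_ioctl_done() as the completion callback, and down() sleeps until that callback runs up(). A userspace model of the handshake with a POSIX semaphore initialized to zero (compile with -lpthread):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t done;                  /* plays the role of mimd_ioctl_sem */

    static void *completion_path(void *arg)
    {
        /* ... the command completes some time later ... */
        sem_post(&done);                /* up() in megadev_ioctl_done() */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        sem_init(&done, 0, 0);          /* init_MUTEX_LOCKED(): starts unavailable */
        pthread_create(&t, NULL, completion_path, NULL);
        sem_wait(&done);                /* down(): sleep until completion */
        pthread_join(t, NULL);
        puts("command completed");
        return 0;
    }
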
sc->request_buffer;
+
+	memcpy (scb->mboxData, ioc->mbox, sizeof (scb->mboxData));
+
+	/* The generic mailbox */
+	mbox = (mega_mailbox *) ioc->mbox;
+
+	/*
+	 * Get the user command
+	 */
+	cmd = ioc->mbox[0];
+
+	switch (cmd) {
+	case MEGA_MBOXCMD_PASSTHRU:
+		/*
+		 * prepare the SCB with information from the user ioctl structure
+		 */
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0)
+		pthru = scb->pthru;
+#else
+		pthru = &scb->pthru;
+#endif
+		memcpy (pthru, &ioc->pthru, sizeof (mega_passthru));
+		mboxpthru = (struct mbox_passthru *) scb->mboxData;
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0)
+		if (megacfg->flag & BOARD_64BIT) {
+			/* This is just a sample with one element;
+			 * this branch executes only on 2.4 kernels
+			 */
+			mboxpthru->dataxferaddr = scb->dma_passthruhandle64;
+			scb->sg64List[0].address =
+			    pci_map_single (megacfg->dev,
+					    ioc->data,
+					    4096, PCI_DMA_BIDIRECTIONAL);
+			scb->sg64List[0].length = 4096;	// TODO: Check this
+			pthru->dataxferaddr = scb->dma_sghandle64;
+			pthru->numsgelements = 1;
+			mboxpthru->cmd = 0xC3;
+		} else {
+			mboxpthru->dataxferaddr = scb->dma_passthruhandle64;
+			pthru->dataxferaddr =
+			    pci_map_single (megacfg->dev,
+					    ioc->data,
+					    4096, PCI_DMA_BIDIRECTIONAL);
+			pthru->numsgelements = 0;
+		}
+
+#else
+		{
+			mboxpthru->dataxferaddr = virt_to_bus (&scb->pthru);
+			pthru->dataxferaddr = virt_to_bus (ioc->data);
+			pthru->numsgelements = 0;
+		}
+#endif
+
+		pthru->reqsenselen = 14;
+		break;
+
+	default:		/* Normal command */
+		mboxioc = (mega_ioctl_mbox *) scb->mboxData;
+
+		if (ioc->ui.fcs.opcode == M_RD_IOCTL_CMD_NEW) {
+			scb->buff_ptr = ioc->ui.fcs.buffer;
+			scb->iDataSize = ioc->ui.fcs.length;
+		} else {
+			scb->buff_ptr = ioc->data;
+			scb->iDataSize = 4096;	// TODO:check it
+		}
+
+		set_mbox_xfer_addr (megacfg, scb, mboxioc, FROMTO_DEVICE);
+		mboxioc->numsgelements = 0;
+		break;
+	}
+
+	return scb;
+}
+
+static int
+megadev_close (struct inode *inode, struct file *filep)
+{
+#ifdef MODULE
+	MOD_DEC_USE_COUNT;
+#endif
+	return 0;
+}
+
+
+static int
+mega_support_ext_cdb(mega_host_config *this_hba)
+{
+	mega_mailbox *mboxpnt;
+	unsigned char mbox[16];
+	int ret;
+
+	mboxpnt = (mega_mailbox *) mbox;
+
+	memset(mbox, 0, sizeof (mbox));
+	/*
+	 * issue command to find out if controller supports extended CDBs.
+	 */
+	mbox[0] = 0xA4;
+	mbox[2] = 0x16;
+
+	ret = megaIssueCmd(this_hba, mbox, NULL, 0);
+
+	return !ret;
+}
+
+
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,4,0)
+void *
+dma_alloc_consistent(void *dev, size_t size, dma_addr_t *dma_addr)
+{
+	void *_tv;
+	int npages;
+	int order = 0;
+
+	/*
+	 * How many pages the application needs
+	 */
+	npages = size / PAGE_SIZE;
+
+	/* Do we need one more page */
+	if(size % PAGE_SIZE)
+		npages++;
+
+	order = mega_get_order(npages);
+
+	_tv = (void *)__get_free_pages(GFP_DMA, order);
+
+	if( _tv != NULL ) {
+		memset(_tv, 0, size);
+		*(dma_addr) = virt_to_bus(_tv);
+	}
+
+	return _tv;
+}
+
+/*
+ * int mega_get_order(int)
+ *
+ * returns the order to be used as the 2nd argument to __get_free_pages() -
+ * which returns pow(2, order) pages - AM
+ */
+int
+mega_get_order(int n)
+{
+	int i = 0;
+
+	while( pow_2(i++) < n )
+		; /* null statement */
+
+	return i-1;
+}
+
+/*
+ * int pow_2(int)
+ *
+ * calculates pow(2, i)
+ */
+int
+pow_2(int i)
+{
+	unsigned int v = 1;
+
+	while(i--)
+		v <<= 1;
+
+	return v;
+}
+
+void
+dma_free_consistent(void *dev, size_t size, void *vaddr, dma_addr_t dma_addr)
+{
+	int npages;
+	int order;
+
+	npages = size / PAGE_SIZE;
+
+	if(size % PAGE_SIZE)
+		npages++;
+
+	/*
+	 * Must mirror the order used in dma_alloc_consistent(); a hard-coded
+	 * cap here would free fewer pages than were allocated.
+	 */
+	order = mega_get_order(npages);
+
+	free_pages((unsigned long)vaddr, order);
+
+}
+#endif
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0)
+static
+#endif				/* LINUX VERSION 2.4.XX */
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,0) || defined(MODULE)
+Scsi_Host_Template driver_template = MEGARAID;
+
+#include "scsi_module.c"
+#endif				/* LINUX VERSION 2.4.XX || MODULE */
+
+/* vi: set ts=4: */
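
mega_get_order() above computes the smallest order such that 2^order pages cover the request, which is the contract of __get_free_pages(). A compilable check of the arithmetic; the last assertion is why the free path must mirror the allocation path rather than hard-code a cap:

    #include <assert.h>

    static int get_order_of(int npages)
    {
        int order = 0;

        while ((1 << order) < npages)   /* smallest power of two >= npages */
            order++;
        return order;
    }

    int main(void)
    {
        assert(get_order_of(1) == 0);
        assert(get_order_of(2) == 1);
        assert(get_order_of(3) == 2);   /* rounds up to 4 pages */
        assert(get_order_of(16) == 4);  /* a cap at order 3 would under-free here */
        return 0;
    }
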
diff -urpN linux-2.4.9-linus/drivers/scsi/osst_options.h linux-2.4.9-larpage/drivers/scsi/osst_options.h
--- linux-2.4.9-linus/drivers/scsi/osst_options.h	2001-06-11 19:15:27.000000000 -0700
+++ linux-2.4.9-larpage/drivers/scsi/osst_options.h	2002-11-20 02:02:53.000000000 -0800
@@ -48,11 +48,11 @@
 /* Maximum number of scatter/gather segments */
 /* Fit one buffer in pages and add one for the AUX header */
-#define OSST_MAX_SG      (((OSST_BUFFER_BLOCKS*1024) / PAGE_SIZE) + 1)
+#define OSST_MAX_SG      (((OSST_BUFFER_BLOCKS*1024 + PAGE_SIZE - 1) / PAGE_SIZE) + 1)
 
 /* The number of scatter/gather segments to allocate at first try (must be
    smaller or equal to the maximum). */
-#define OSST_FIRST_SG    ((OSST_BUFFER_BLOCKS*1024) / PAGE_SIZE)
+#define OSST_FIRST_SG    ((OSST_BUFFER_BLOCKS*1024 + PAGE_SIZE - 1) / PAGE_SIZE)
 
 /* The size of the first scatter/gather segments (determines the maximum block
    size for SCSI adapters not supporting scatter/gather). The default is set
diff -urpN linux-2.4.9-linus/drivers/scsi/scsi_dma.c linux-2.4.9-larpage/drivers/scsi/scsi_dma.c
--- linux-2.4.9-linus/drivers/scsi/scsi_dma.c	2000-09-05 14:08:55.000000000 -0700
+++ linux-2.4.9-larpage/drivers/scsi/scsi_dma.c	2002-11-20 02:02:55.000000000 -0800
@@ -22,8 +22,13 @@
 /*
  * PAGE_SIZE must be a multiple of the sector size (512).  True
  * for all reasonably recent architectures (even the VAX...).
+ *
+ * If PAGE_SIZE is greater than 16kB, allocate and free in larger units,
+ * still calling them "sectors" here, though now multiples of 512 bytes.
  */
-#define SECTOR_SIZE		512
+#define SECTOR_SUBSHIFT		((PAGE_SHIFT>14)? (PAGE_SHIFT-14): 0)
+#define SECTOR_SHIFT		(9+SECTOR_SUBSHIFT)
+#define SECTOR_SIZE		(1<<SECTOR_SHIFT)
 
 #define SECTORS_PER_PAGE	(PAGE_SIZE/SECTOR_SIZE)
 
@@ -49,9 +54,10 @@ void *scsi_malloc(unsigned int len)
 	unsigned int nbits, mask;
 	unsigned long flags;
-	if (len % SECTOR_SIZE != 0 || len > PAGE_SIZE)
+
+	if (len > PAGE_SIZE)
 		return NULL;
 
-	nbits = len >> 9;
+	nbits = (len + SECTOR_SIZE - 1) >> SECTOR_SHIFT;
 	mask = (1 << nbits) - 1;
 
 	spin_lock_irqsave(&allocator_request_lock, flags);
@@ -87,13 +92,13 @@ void *scsi_malloc(unsigned int len)
 	for (j = 0; j <= SECTORS_PER_PAGE - nbits; j++) {
 		if ((dma_malloc_freelist[i] & (mask << j)) == 0) {
 			dma_malloc_freelist[i] |= (mask << j);
-			scsi_dma_free_sectors -= nbits;
+			scsi_dma_free_sectors -= (nbits << SECTOR_SUBSHIFT);
 #ifdef DEBUG
-			SCSI_LOG_MLQUEUE(3, printk("SMalloc: %d %p [From:%p]\n", len, dma_malloc_pages[i] + (j << 9)));
-			printk("SMalloc: %d %p [From:%p]\n", len, dma_malloc_pages[i] + (j << 9));
+			SCSI_LOG_MLQUEUE(3, printk("SMalloc: %d %p [From:%p]\n", len, dma_malloc_pages[i] + (j << SECTOR_SHIFT)));
+			printk("SMalloc: %d %p [From:%p]\n", len, dma_malloc_pages[i] + (j << SECTOR_SHIFT));
 #endif
 			spin_unlock_irqrestore(&allocator_request_lock, flags);
-			return (void *) ((unsigned long) dma_malloc_pages[i] + (j << 9));
+			return (void *) ((unsigned long) dma_malloc_pages[i] + (j << SECTOR_SHIFT));
 		}
 	}
 	spin_unlock_irqrestore(&allocator_request_lock, flags);
@@ -142,9 +147,9 @@ int scsi_free(void *obj, unsigned int le
 		unsigned long page_addr = (unsigned long) dma_malloc_pages[page];
 		if ((unsigned long) obj >= page_addr &&
 		    (unsigned long) obj < page_addr + PAGE_SIZE) {
-			sector = (((unsigned long) obj) - page_addr) >> 9;
+			sector = (((unsigned long) obj) - page_addr) >> SECTOR_SHIFT;
 
-			nbits = len >> 9;
+			nbits = (len + SECTOR_SIZE - 1) >> SECTOR_SHIFT;
 			mask = (1 << nbits) - 1;
 
 			if (sector + nbits > SECTORS_PER_PAGE)
@@ -158,7 +163,7 @@ int scsi_free(void *obj, unsigned int le
 #endif
 				panic("scsi_free:Trying to free unused memory");
 			}
-			scsi_dma_free_sectors += nbits;
+			scsi_dma_free_sectors += (nbits << SECTOR_SUBSHIFT);
 			dma_malloc_freelist[page] &= ~(mask << sector);
 			spin_unlock_irqrestore(&allocator_request_lock, flags);
 			return 0;
@@ -205,8 +210,10 @@ void scsi_resize_dma_pool(void)
 	/*
 	 * Free up the DMA pool.
*/ - if (scsi_dma_free_sectors != dma_sectors) - panic("SCSI DMA pool memory leak %d %d\n", scsi_dma_free_sectors, dma_sectors); + if (scsi_dma_free_sectors != (dma_sectors << SECTOR_SUBSHIFT)) + panic("SCSI DMA pool memory leak %d %d\n", + scsi_dma_free_sectors, + dma_sectors << SECTOR_SUBSHIFT); for (i = 0; i < dma_sectors / SECTORS_PER_PAGE; i++) free_pages((unsigned long) dma_malloc_pages[i], 0); @@ -254,27 +261,27 @@ void scsi_resize_dma_pool(void) if (nents < 64) nents = 64; #endif new_dma_sectors += ((nents * - sizeof(struct scatterlist) + 511) >> 9) * + sizeof(struct scatterlist) + SECTOR_SIZE - 1) >> SECTOR_SHIFT) * SDpnt->queue_depth; if (SDpnt->type == TYPE_WORM || SDpnt->type == TYPE_ROM) - new_dma_sectors += (2048 >> 9) * SDpnt->queue_depth; + new_dma_sectors += (2048 >> SECTOR_SHIFT) * SDpnt->queue_depth; } else if (SDpnt->type == TYPE_SCANNER || SDpnt->type == TYPE_PROCESSOR || SDpnt->type == TYPE_COMM || SDpnt->type == TYPE_MEDIUM_CHANGER || SDpnt->type == TYPE_ENCLOSURE) { - new_dma_sectors += (4096 >> 9) * SDpnt->queue_depth; + new_dma_sectors += (4096 >> SECTOR_SHIFT) * SDpnt->queue_depth; } else { if (SDpnt->type != TYPE_TAPE) { printk("resize_dma_pool: unknown device type %d\n", SDpnt->type); - new_dma_sectors += (4096 >> 9) * SDpnt->queue_depth; + new_dma_sectors += (4096 >> SECTOR_SHIFT) * SDpnt->queue_depth; } } if (host->unchecked_isa_dma && need_isa_bounce_buffers && SDpnt->type != TYPE_TAPE) { - new_dma_sectors += (PAGE_SIZE >> 9) * host->sg_tablesize * + new_dma_sectors += (PAGE_SIZE >> SECTOR_SHIFT) * host->sg_tablesize * SDpnt->queue_depth; new_need_isa_buffer++; } @@ -282,11 +289,14 @@ void scsi_resize_dma_pool(void) } #ifdef DEBUG_INIT - printk("resize_dma_pool: needed dma sectors = %d\n", new_dma_sectors); + printk("resize_dma_pool: needed dma sectors = %d\n", new_dma_sectors << SECTOR_SUBSHIFT); #endif /* limit DMA memory to 32MB: */ - new_dma_sectors = (new_dma_sectors + 15) & 0xfff0; + new_dma_sectors = (new_dma_sectors + SECTORS_PER_PAGE-1) & + ~(SECTORS_PER_PAGE-1); + if (new_dma_sectors > (32*1024*1024 >> SECTOR_SHIFT)) + new_dma_sectors = (32*1024*1024 >> SECTOR_SHIFT); /* * We never shrink the buffers - this leads to @@ -342,12 +352,15 @@ void scsi_resize_dma_pool(void) } } if (out_of_space) { /* try scaling down new_dma_sectors request */ - printk("scsi::resize_dma_pool: WARNING, dma_sectors=%u, " - "wanted=%u, scaling\n", dma_sectors, new_dma_sectors); + printk("scsi::resize_dma_pool: WARNING, " + "dma_sectors=%u, wanted=%u, scaling\n", + dma_sectors << SECTOR_SUBSHIFT, + new_dma_sectors << SECTOR_SUBSHIFT); if (new_dma_sectors < (8 * SECTORS_PER_PAGE)) break; /* pretty well hopeless ... 
*/
 		new_dma_sectors = (new_dma_sectors * 3) / 4;
-		new_dma_sectors = (new_dma_sectors + 15) & 0xfff0;
+		new_dma_sectors = (new_dma_sectors + SECTORS_PER_PAGE-1) &
+						~(SECTORS_PER_PAGE-1);
 		if (new_dma_sectors <= dma_sectors)
 			break;	/* stick with what we have got */
 	} else
@@ -374,7 +387,7 @@ void scsi_resize_dma_pool(void)
 		memcpy(new_dma_malloc_pages, dma_malloc_pages, size);
 		kfree((char *) dma_malloc_pages);
 	}
-	scsi_dma_free_sectors += new_dma_sectors - dma_sectors;
+	scsi_dma_free_sectors += (new_dma_sectors - dma_sectors) << SECTOR_SUBSHIFT;
 	dma_malloc_pages = new_dma_malloc_pages;
 	dma_sectors = new_dma_sectors;
 	scsi_need_isa_buffer = new_need_isa_buffer;
@@ -383,7 +396,7 @@ void scsi_resize_dma_pool(void)
 #ifdef DEBUG_INIT
 	printk("resize_dma_pool: dma free sectors = %d\n", scsi_dma_free_sectors);
-	printk("resize_dma_pool: dma sectors = %d\n", dma_sectors);
+	printk("resize_dma_pool: dma sectors = %d\n", dma_sectors << SECTOR_SUBSHIFT);
 	printk("resize_dma_pool: need isa buffers = %d\n", scsi_need_isa_buffer);
 #endif
 }
@@ -410,7 +423,7 @@ int scsi_init_minimal_dma_pool(void)
 	spin_lock_irqsave(&allocator_request_lock, flags);
 
 	dma_sectors = PAGE_SIZE / SECTOR_SIZE;
-	scsi_dma_free_sectors = dma_sectors;
+	scsi_dma_free_sectors = (dma_sectors << SECTOR_SUBSHIFT);
 	/*
 	 * Set up a minimal DMA buffer list - this will be used during scan_scsis
 	 * in some cases.
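
The allocator patched above hands out DMA memory from a per-page bitmap: each page carries one FreeSectorBitmap word with a bit per allocation unit, and scsi_malloc() scans for nbits consecutive clear bits. The larpage change only rescales the unit: with 64kB pages (PAGE_SHIFT 16), SECTOR_SUBSHIFT is 2, SECTOR_SHIFT is 11, the unit is 2048 bytes, and the bitmap stays at 65536/2048 = 32 bits, while scsi_dma_free_sectors keeps counting 512-byte sectors through the << SECTOR_SUBSHIFT scaling. A compilable model of the bit-run scan (UNITS_PER_PAGE is illustrative):

    #include <stdio.h>

    #define UNITS_PER_PAGE 32           /* SECTORS_PER_PAGE in the driver */

    static int bitmap_alloc(unsigned int *map, int nbits)
    {
        unsigned int mask = (1u << nbits) - 1;
        int j;

        for (j = 0; j <= UNITS_PER_PAGE - nbits; j++) {
            if ((*map & (mask << j)) == 0) {
                *map |= mask << j;      /* claim the run of clear bits */
                return j;               /* unit offset within the page */
            }
        }
        return -1;                      /* no run long enough */
    }

    int main(void)
    {
        unsigned int map = 0;

        printf("%d\n", bitmap_alloc(&map, 3));  /* 0: first three units */
        printf("%d\n", bitmap_alloc(&map, 3));  /* 3: the next free run */
        return 0;
    }
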
diff -urpN linux-2.4.9-linus/drivers/scsi/sun3_scsi.c linux-2.4.9-larpage/drivers/scsi/sun3_scsi.c
--- linux-2.4.9-linus/drivers/scsi/sun3_scsi.c	2001-03-02 18:38:39.000000000 -0800
+++ linux-2.4.9-larpage/drivers/scsi/sun3_scsi.c	2002-11-20 02:02:55.000000000 -0800
@@ -218,7 +218,7 @@ int sun3scsi_detect(Scsi_Host_Template *
 	   if(!(iopte & SUN3_PAGE_TYPE_IO)) /* this an io page? */
 		   continue;
 
-	   if(((iopte & SUN3_PAGE_PGNUM_MASK) << PAGE_SHIFT) ==
+	   if(((iopte & SUN3_PAGE_PGNUM_MASK) << SUN3_PTE_SIZE_BITS) ==
 	      IOBASE_SUN3_SCSI) {
 		   count = 1;
 		   break;
diff -urpN linux-2.4.9-linus/drivers/sgi/char/graphics.c linux-2.4.9-larpage/drivers/sgi/char/graphics.c
--- linux-2.4.9-linus/drivers/sgi/char/graphics.c	2001-03-19 12:35:09.000000000 -0800
+++ linux-2.4.9-larpage/drivers/sgi/char/graphics.c	2002-11-20 02:02:55.000000000 -0800
@@ -215,14 +215,13 @@ sgi_graphics_close (struct inode *inode,
  * This is the core of the direct rendering engine.
*/
-unsigned long
+struct page *
 sgi_graphics_nopage (struct vm_area_struct *vma, unsigned long address,
 		     int no_share)
 {
-	pgd_t *pgd; pmd_t *pmd; pte_t *pte;
 	int board = GRAPHICS_CARD (vma->vm_dentry->d_inode->i_rdev);
-	unsigned long virt_add, phys_add;
+	unsigned long phys_add;
 
 #ifdef DEBUG
 	printk ("Got a page fault for board %d address=%lx guser=%lx\n", board,
@@ -243,15 +242,8 @@ sgi_graphics_nopage (struct vm_area_stru
 	/* Map the physical address of the newport registers into the address
 	 * space of this process */
 
-	virt_add = address & PAGE_MASK;
-	phys_add = cards[board].g_regs + virt_add - vma->vm_start;
-	remap_page_range(virt_add, phys_add, PAGE_SIZE, vma->vm_page_prot);
-
-	pgd = pgd_offset(current->mm, address);
-	pmd = pmd_offset(pgd, address);
-	pte = pte_offset(pmd, address);
-	printk("page: %08lx\n", pte_page(*pte));
-	return pte_page(*pte);
+	phys_add = cards[board].g_regs + address - vma->vm_start;
+	return virt_to_page(__va(phys_add));
 }
 
 /*
diff -urpN linux-2.4.9-linus/drivers/sgi/char/graphics.c.orig linux-2.4.9-larpage/drivers/sgi/char/graphics.c.orig
--- linux-2.4.9-linus/drivers/sgi/char/graphics.c.orig	1969-12-31 16:00:00.000000000 -0800
+++ linux-2.4.9-larpage/drivers/sgi/char/graphics.c.orig	2002-11-20 02:02:55.000000000 -0800
@@ -0,0 +1,376 @@
+/* $Id: graphics.c,v 1.22 2000/02/18 00:24:43 ralf Exp $
+ *
+ * gfx.c: support for SGI's /dev/graphics, /dev/opengl
+ *
+ * Author: Miguel de Icaza (miguel@nuclecu.unam.mx)
+ *         Ralf Baechle (ralf@gnu.org)
+ *         Ulf Carlsson (ulfc@bun.falkenberg.se)
+ *
+ * On IRIX, /dev/graphics is [10, 146]
+ *          /dev/opengl   is [10, 147]
+ *
+ * From a mail with Mark J. Kilgard, /dev/opengl and /dev/graphics are
+ * the same thing, the use of /dev/graphics seems deprecated though.
+ *
+ * The reason that the original SGI programmer had to use only one
+ * device for all the graphic cards on the system will remain a
+ * mystery for the rest of our lives.  Why do some ioctls take a board
+ * number and others not?  A mystery.  Why do they map the hardware
+ * registers into the user address space with an ioctl instead of
+ * mmap?  A mystery too.  Why did they not use the standard way of
+ * making ioctl constants and instead stick in a random constant?
+ * Also a mystery.
+ *
+ * We implement those mysterious things, and try not to think about
+ * the reasons behind them.
+ */
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include "gconsole.h"
+#include "graphics.h"
+#include "usema.h"
+#include
+#include
+#include
+#include
+#include
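
The graphics.c hunk above shows the shape of every nopage conversion in this patch: a 2.4 nopage method returns the struct page * for the faulting address, so the handler can translate its device offset directly instead of calling remap_page_range() and then walking the page tables to read the pte back. A kernel-flavored sketch of the resulting pattern (not compilable stand-alone; device_base is a hypothetical physical base for the mapped registers):

    #include <linux/mm.h>

    static unsigned long device_base;   /* hypothetical: set at probe time */

    static struct page *example_nopage(struct vm_area_struct *vma,
                                       unsigned long address, int no_share)
    {
        /* offset of the fault within the vma, applied to the device base */
        unsigned long phys = device_base + address - vma->vm_start;

        /* 2.4 contract: hand the mm layer the struct page itself */
        return virt_to_page(__va(phys));
    }
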